|
My simple approach would be a greedy algorithm that finds y-monotone polygons, based on a scanline/sweepline technique. This should satisfy conditions a) and b), while conditions c), d) and e) are approximately evaluated in a second stage of the algorithm, using a score/ranking based on Delaunay triangulations (q.v. the Voronoi hint).
Of course it's a greedy algorithm, but it should perform well, given that finding the best solution is probably NP-hard.
|
|
|
|
|
Here is how I would do it:
1) Add line segments to the extents of your workspace
2) Find the intersections of all line segments, and put them into a directed graph as vertices
3) For each line segment, add the endpoint of that line segment to the graph (if not already there)
4) Add bidirectional edges to the graph for the pieces of line segments that connect adjacent intersections or endpoints
5) From each line segment endpoint, draw a fictitious segment horizontally (in both directions) until you hit another segment. Add a waypoint at each such intersection, a directed edge from the segment endpoint to the intersection, and directed edges from that intersection to the adjacent points on the line it hit.
6) Repeat step #5 in the vertical direction
Whew! OK, now the setup is complete. All you have to do now is the actual work:
1) Find all polygons in the graph. Do this by traversing the nodes, repeatedly taking the adjacent edge in the same direction (i.e., at each vertex, find the next edge that is clockwise from the one you arrived on, and take it to the next vertex). If you arrive at a vertex with no outgoing edges, there is no polygon that meets your starting criteria, so move on. Otherwise, push the resulting polygon onto a queue/stack.
2) For each polygon in your queue/stack, compute the largest inscribed rectangle. If this rectangle has a larger area than your current largest, swap them out. If they are the same, test your d) criterion, followed by your e) criterion, to determine which one to use.
Now you have the largest rectangle that meets your criteria (although I don't even want to imagine the asymptotic performance of this algorithm).
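The clockwise-traversal step can be sketched roughly like this. This is a hedged Python sketch with a made-up graph representation (`pos` maps vertex ids to coordinates, `adj` maps vertex ids to neighbor lists), not a definitive implementation:

```python
import math

def trace_face(pos, adj, start, first):
    """Trace one polygon (face) of a planar graph: from each vertex,
    take the outgoing edge that makes the smallest clockwise turn
    relative to the edge we arrived on."""
    face = [start]
    prev, cur = start, first
    while cur != start:
        face.append(cur)
        # angle of the direction pointing back the way we came
        in_ang = math.atan2(pos[prev][1] - pos[cur][1],
                            pos[prev][0] - pos[cur][0])
        best, best_turn = None, None
        for nxt in adj[cur]:
            if nxt == prev and len(adj[cur]) > 1:
                continue  # only backtrack at dead ends
            out_ang = math.atan2(pos[nxt][1] - pos[cur][1],
                                 pos[nxt][0] - pos[cur][0])
            turn = (in_ang - out_ang) % (2 * math.pi)  # clockwise turn
            if best_turn is None or turn < best_turn:
                best, best_turn = nxt, turn
        if best is None:
            return None  # no outgoing edge: no polygon from this start
        prev, cur = cur, best
    return face

# unit square: tracing from vertex 0 along edge (0, 1) returns all four corners
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(trace_face(pos, adj, 0, 1))  # → [0, 1, 2, 3]
```

Running this once per (vertex, outgoing edge) pair and deduplicating the resulting faces gives the polygon list for step 2).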
Sounds like somebody's got a case of the Mondays
-Jeff
|
|
|
|
|
I am engaged in research and development of a new article in a series of articles on algorithms. I plan to use the Big ‘O’ Algorithm Analyzer for .NET featured in my article here at The Code Project.
Big O Algorithm Analyzer for .NET
I recently read the MSDN article by Dr. James McCaffrey on the QICT pairwise testing utility he wrote for .NET. In his article he encourages the reader to try different algorithms for weighting the pairs found by the algorithm used in the tool.
I thought this would be a great application to put under test with the Big ‘O’ tool, to demonstrate how to instrument code in an application and additionally to show some methods of improving the algorithms for given data sets.
Dr. James’ article can be found here:
Pairwise Testing with QICT
I would like to ask anyone who would like to write a different algorithm as described in the article to post it here, or to discuss different approaches that seem like they would be an asset to the existing code base.
I will also demonstrate the difference between a good Knuth shuffle and a bad one; consider the Perl code below:
Bad Knuth:
sub shuffle {
    my @array  = @_;
    my $length = scalar(@array);
    # Biased: $r is drawn from the whole array on every pass, so the
    # $length ** $length possible pick sequences cannot map uniformly
    # onto the $length! permutations.
    for (my $i = 0; $i < $length; $i++) {
        my $r = int(rand() * $length);
        @array[$i, $r] = @array[$r, $i];
    }
    return @array;
}
Good Knuth:
sub shuffle {
    my @array  = @_;
    my $length = scalar(@array);
    # Fisher-Yates: swap each element only with a position at or below
    # its own index, which makes every permutation equally likely.
    for (my $i = $length - 1; $i > 0; $i--) {
        my $r = int(rand() * ($i + 1));
        @array[$i, $r] = @array[$r, $i];
    }
    return @array;
}
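One way to see the bias concretely (a small Python sketch of my own, not part of the article's code): enumerate every possible sequence of random picks for a 3-element array and count which permutations come out. The bad shuffle's 27 outcomes cannot land evenly on 6 permutations; the good shuffle's 6 outcomes hit each permutation exactly once.

```python
from itertools import product
from collections import Counter

def count_perms(n, steps):
    """steps is a list of (index, range_size) swap steps; enumerate
    every possible sequence of random picks and count how often each
    final permutation occurs."""
    counts = Counter()
    for picks in product(*[range(size) for _, size in steps]):
        a = list(range(n))
        for (i, _), r in zip(steps, picks):
            a[i], a[r] = a[r], a[i]
        counts[tuple(a)] += 1
    return counts

n = 3
bad  = count_perms(n, [(i, n) for i in range(n)])                 # r in 0..n-1 every pass
good = count_perms(n, [(i, i + 1) for i in range(n - 1, 0, -1)])  # r in 0..i

print(sorted(bad.values()))   # → [4, 4, 4, 5, 5, 5]  (27 outcomes, non-uniform)
print(sorted(good.values()))  # → [1, 1, 1, 1, 1, 1]  (6 outcomes, uniform)
```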
|
|
|
|
|
Okay, I'm stumped. Would anyone like to tell me why this is a bad question?
I'm not trolling my article.
I am trying to get ideas to better solve the sorting/complexity algorithm as posed in the MSDN article.
I got downvoted twice on this post. I'd like to know why, so that in the future I don't make the same mistake.
|
|
|
|
|
As an independent observer, i.e. I know nothing about the QICT Knuth shuffle algorithm, I am as stumped as you are. Your previous post seems quite reasonable to me, so I can only assume the downvoting was by one or more of the many people who seem to stalk the forums downvoting for no reason. I have noticed certain people automatically downvote articles without giving a reason; it's just something we have to live with. I think a question was raised as to tracking these types but Chris commented that it would be too much effort for little gain.
I suggest you go ahead with your article and see what happens.
[edit]I gave you a 5 to try and even the score, let's hope someone else follows suit.[/edit]
|
|
|
|
|
Thanks...
Maybe downvoting should require a comment, with an option to make it anonymous to protect the innocent... I will write it up as a suggestion.
I will also download the code and post the exact algorithm in the thread to better illustrate the problem.
|
|
|
|
|
I don't see why it should be anonymous. If you think something is not up to standard then you should have the courage to put your name and comments behind it.
|
|
|
|
|
Lately I've been rewriting some graph algorithms to operate on adjacency-matrix graphs where the matrix is implemented as an array of ints/longs (for speed reasons).
Algorithms such as determining the size of a strongly connected sub-graph (somewhat like DFS, except the visiting order is different) and perfect graph coloring in chordal graphs.
They demonstrate uses of many bit hacks, such as x &= (x - 1).
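For readers who haven't met it, x &= (x - 1) clears the lowest set bit of x. A rough Python sketch (my own illustration, not code from the planned article) of two typical uses on a bitmask adjacency row:

```python
def popcount(x):
    """Count set bits by repeatedly clearing the lowest one:
    x & (x - 1) removes exactly the lowest set bit of x."""
    n = 0
    while x:
        x &= x - 1
        n += 1
    return n

def set_bit_indices(row):
    """Yield the indices of the set bits in an adjacency-matrix row,
    i.e. the neighbors encoded in that row's bitmask."""
    while row:
        low = row & -row              # isolate the lowest set bit
        yield low.bit_length() - 1    # its index
        row &= row - 1                # clear it and continue

print(popcount(0b101101))             # → 4
print(list(set_bit_indices(0b1010)))  # → [1, 3]
```

The same idioms translate directly to C on an array of unsigned longs, which is presumably where the speed benefit comes from.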
Would anyone like to see an article about this?
(and is this the right place to ask this question?)
|
|
|
|
|
harold aptroot wrote: Would anyone like to see an article about this?
(and is this the right place to ask this question?)
yes, sure. and no (Article Writing, Lounge).
|
|
|
|
|
Thanks Luc, next time I'll ask it in the Lounge
|
|
|
|
|
But he's asking if anyone wants an article about this particular algorithm. Not everyone wastes time in The Lounge.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
And how about the bitwise version of Algorithm X? Is that interesting enough?
|
|
|
|
|
A float has only 24 bits that need to be divided, and one of them is "usually 1", so it requires fewer iterations than a full 32-bit number (or even 64, which is a disaster).
Doubles shouldn't really be faster than ints though (edit: ok, yes, it is possible in some cases, but it depends, see below); something else is going on here. Note that the time a division takes also depends on the magnitude of the result for ints, and on the roundness of the divisor for floats. This makes it harder to accurately compare the speed of integer division and floating-point division.
As if that isn't bad enough yet there is also SSE vs good ol' FPU, and with Java you really have no clue which one you're getting, so I suggest you use C++ with assembly subroutines to take full control of what you're testing.
Hope this helps
edit2: forgot source, Agner Fog's optimization resources[^]
modified on Monday, December 21, 2009 11:36 AM
|
|
|
|
|
Everything - ok maybe not, but pretty much.
Dividing two doubles may take as many as 21 cycles, and as few as 6. Dividing two 32-bit ints may take 18 to 42 cycles, but it's usually at the smaller end of that range. Whether it is on average better to divide doubles or ints depends on what kinds of values you have.
And doing any kind of math on denormals/NaNs/infinities can incur a penalty.
Also, for 64-bit numbers idiv is slower than div (by up to 30%!)
|
|
|
|
|
On most processors (at least the high-end ones) the speed I have seen for a series of numeric operations would be float >= double > int > long; there are several reasons for this:
1.
floating-point units are more highly pipelined, which results in a greater latency but a better throughput, so for a series of FP operations, the FPU would be faster than the same operations handled by integers.
2.
integer divisions compute only a few (say 3 or 4) bits per cycle, so a long division could take twice as long as an int division.
It is similar for real divisions, but with more bits per cycle.
3.
float operations leave the integer functional units available for other things, such as array indexing, iteration counting, etc., which are almost always present in code too; as those units can operate in parallel, having more of them working for you should improve overall speed.
BTW: division (and square root) is inherently sequential, much more so than addition and multiplication.
|
|
|
|
|
Floating-point division is not particularly pipelined either, though; the difference between the latency and the reciprocal throughput of FDIV is only 1 cycle (on a Core 2).
|
|
|
|
|
Like the FlowLayoutPanel, where controls automatically get laid out according to their sizes and sequence, I have a requirement for a fluid layout.
In a rectangle or area (say 500w x 100h), there are some fixed small controls. Let's say a rectangle control of (100w x 30h) is fixed at point (top 0, left 150). Now, I need to fill the bigger rectangle with small shaped controls that have constant areas but variable widths. Height can be the maximum wherever possible.
The controls not adjacent to a fixed control will be rectangles, so no problem there. The controls adjacent to fixed controls will need to take a polygon shape of variable height to fit into the remaining space.
Any logic already existing? Any approaches?
And yes, this is for winforms.
|
|
|
|
|
Assume that there is a vertical cylinder with diameters as follows:
1cm at base
3cm at the height of 2cm
2cm at the height of 4cm
1cm at the height of 7cm
etc etc
Now, what would be the heights of different colored fluid columns stacked one over the other (assuming there are no other forces)? The columns are on top of each other and movable by drag and drop, so the calculation for each column depends on the column below it.
It's not necessary to consider a volumetric analysis, because the solution could work in 2D with areas instead.
The function should take as parameters the volume/area of each column and the level at which it starts.
This is required for a graphical application where the movement of a liquid is shown in 2D.
I am looking for a faster alternative to "for" or "while" loops. It needs to be very fast so that fluid motion can be achieved. The current algorithm is damn slow.
My current solution looks like:
while (volumeToAdjust > 0)
{
    volumeCovered = Min(volumeToAdjust, volumePossibleInCurrentWidth)
    if (volumeCovered >= volumePossibleInCurrentWidth)   // current width segment filled completely,
        height += heightForCurrentWidth                  // so move on to the next width segment
    else
        height += CalculateHeightForThisVolumeInCurrentWidth(volumeCovered)
    volumeToAdjust -= volumeCovered
}
This solution works fine for a single column of fluid. With multiple columns and multiple widths, the program slows down. Any better ideas?
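One possible speed-up, shown as a rough Python sketch under my own assumptions about the data (the vessel is a list of (width, height) segments from the bottom up, and 2D area stands in for volume, as the question allows): precompute the cumulative area and cumulative height at each segment boundary once, then answer each "surface height for this much fluid" query with a binary search instead of a loop.

```python
import bisect

# Hypothetical vessel: (width, height) segments from bottom to top,
# loosely mirroring the diameters in the question.
segments = [(1.0, 2.0), (3.0, 2.0), (2.0, 3.0)]

# Precompute once (redo only when the vessel shape changes):
# cumulative area and cumulative height at each segment boundary.
cum_area, cum_height = [0.0], [0.0]
for w, h in segments:
    cum_area.append(cum_area[-1] + w * h)
    cum_height.append(cum_height[-1] + h)

def height_for_volume(v):
    """Height of the fluid surface when total volume v is poured in
    from the bottom: O(log n) per query instead of a per-frame loop."""
    i = bisect.bisect_right(cum_area, v) - 1
    if i >= len(segments):
        return cum_height[-1]  # vessel is full
    w, _ = segments[i]
    return cum_height[i] + (v - cum_area[i]) / w

print(height_for_volume(5.0))  # → 3.0 (fills the 1-wide segment, then half the 3-wide one)
```

For a stack of columns, the top of column k is height_for_volume() of the cumulative volume up to and including k, and its bottom is the same function of the cumulative volume below it, so each column still costs only O(log n) per frame.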
|
|
|
|
|
Do the diameters change or can they be hard coded?
What happens in between the values you gave? Is the diameter constant or tapered?
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
|
|
|
|
|
Yes, the diameters can be changed. Also, the volume of each fluid column and their sequence can be changed by drag and drop. That is why I need an algorithm that does it the fastest way.
The diameters are not tapered.
|
|
|
|
|
http://www.prime-numbers.org
Tadeusz Westawic
An ounce of Clever is worth a pound of Experience.
|
|
|
|
|
that list is not complete.
|
|
|
|
|
Of course it is, they all are.
Tadeusz Westawic
An ounce of Clever is worth a pound of Experience.
|
|
|
|
|
No it isn't, just the ones less than 10 billion. As far as anyone knows, the complete list is infinite.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Hi everybody,
I am in a critical position. Can you please tell me how to encrypt and decrypt plain text using ECC (Elliptic Curve Cryptography) in C#? I need a step-by-step process, or if possible the complete source code.
Can anyone help me, please?
Thanks in advance for any reply.
|
|
|
|