|
Okay, here's what I have. It seems to produce reasonable results, but I'm open to suggestions for making it more efficient (also, there might be flaws in my logic that I just haven't found yet). For distributing objects horizontally:
- Given a set of objects.
- Iterate through the set and find the object with the left-most edge (leftObj) and the value of its left edge (left), and the object with the right-most edge (rightObj) and the value of its right edge (right). Left and right define the limits of my space, and leftObj and rightObj don't need to be moved.
- The range in which to distribute the remaining objects is the (left-most edge of rightObj) - (right-most edge of leftObj).
- Now, from my set of objects, I must remove leftObj and rightObj (since they don't move) and sort the remaining objects by their left-most edge (maybe center would be better here?)
- Iterate through my new set of objects and subtract the widths of each object from the range. What's left at the end is the space I need to distribute between the remaining objects.
- Define step as the remaining range divided by the number of objects remaining plus 1.
- Define position (where to put the first object) as the right-most edge of leftObj + step.
- Iterate through the remaining objects - for each object set the left-most edge to position. Then increase position by step + the width of this object.
So there you have it. I have to iterate through my list of objects at least 3 times, and I need to sort them once. The number of objects is never likely to be very large, and execution speed on my development laptop is essentially instant, but that doesn't mean I'm not interested in how to make it more efficient purely from an academic point-of-view.
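The steps above can be sketched like this (a Python sketch, not the actual code; the rect representation and names are illustrative assumptions, and it assumes the left-most and right-most objects are distinct):

```python
def distribute_horizontally(rects):
    """Distribute rects evenly between the left-most and right-most ones.

    Each rect is a dict with 'left' and 'width' (illustrative layout);
    its right edge is left + width. Follows the steps described above.
    """
    if len(rects) <= 2:
        return  # nothing to distribute

    # Find the anchors: they keep their positions.
    left_obj = min(rects, key=lambda r: r['left'])
    right_obj = max(rects, key=lambda r: r['left'] + r['width'])

    # Everything else moves; sort by current left edge.
    movers = [r for r in rects if r is not left_obj and r is not right_obj]
    movers.sort(key=lambda r: r['left'])

    # Range between the inner edges of the two anchors,
    # minus the widths of the movers, is the free space.
    inner_left = left_obj['left'] + left_obj['width']
    inner_right = right_obj['left']
    space = (inner_right - inner_left) - sum(r['width'] for r in movers)

    # One gap before each mover plus one after the last: len(movers) + 1 gaps.
    step = space / (len(movers) + 1)

    pos = inner_left + step
    for r in movers:
        r['left'] = pos
        pos += step + r['width']
```

Same shape as the description: one pass to find the anchors, one sort, one pass to place the movers.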
|
|
|
|
|
Sorting the objects first would avoid having to search for the leftmost and rightmost objects; after the sort they're the first and last objects. That's the only optimization I can see.
|
|
|
|
|
Well, I thought that too, but there's a problem with it. If I sort by left-most edge, the item with the highest left-most edge isn't necessarily the right-most object: another object could have a smaller left-most edge (i.e. be further to the left) but be wide enough that its right-most edge is actually further to the right. The same is possible on the left side if you use right edges instead.
So I'm not sure there is a sort key that will always guarantee that the first element is the left-most and the last element is the right-most.
{-A-} {-B-}
{---C----} {---D---}
Example: if I sort by left edge ({), C is the left-most element and B is the right-most. If I sort by right edge (}), A is the left-most and D is the right-most. Neither of these is correct!
|
|
|
|
|
As I suspected, the problem appears to be in the task definition.
From your description, it appears the most extreme coordinates are the most relevant. You might try sorting from the outside in:
1. Get the leftmost element, and make that the first item in your "sorted" list. (It's not a real sort, because what we have here is called a "partial order".)
2. Get the rightmost element, and make that the LAST item in the sorted list.
3. Get the second-leftmost element and make that the second item in the sorted list.
4. Get the second-rightmost element and make it the next-to-last item in the sorted list.
5. Repeat until you reach the middle item.
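As a sketch (Python for illustration; the left/width fields are placeholders, not from the original code), steps 1-5 look like:

```python
def outside_in_order(rects):
    """Order rects from the outside in: left-most, then right-most,
    then second-left-most, and so on. Not a true sort, since the
    extents only give a partial order."""
    remaining = list(rects)
    front, back = [], []
    while remaining:
        # Take the current left-most for the front of the list.
        leftmost = min(remaining, key=lambda r: r['left'])
        remaining.remove(leftmost)
        front.append(leftmost)
        if not remaining:
            break
        # Take the current right-most for the back of the list.
        rightmost = max(remaining, key=lambda r: r['left'] + r['width'])
        remaining.remove(rightmost)
        back.append(rightmost)
    return front + back[::-1]
```

On the {A, B, C, D} example above this yields C, A, B, D: C and D are correctly pinned to the outside even though no single edge-based sort key finds both.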
|
|
|
|
|
An interesting idea. I might try that.
|
|
|
|
|
I guess one place that technique might fall down would be if you had this:
....[A].......[B]..[C].....
..[..........D..........]..
Which when distributed should probably look like this:
.....[A]....[B]....[C].....
..[..........D..........]..
In this case D is both the left-most and right-most element!
|
|
|
|
|
That's a consequence of treating a partial order like a total order.
If you want to try to sort these elements, D must appear either at the left end or the right end. The algorithm you design should reflect your goals; if your goals are unclear, finding a solution becomes many times harder.
|
|
|
|
|
I'm not sure what it is you want. Is this a two-dimensional problem, or are the objects grouped into N rows, so it becomes N one-dimensional problems? I suggest you clarify by providing one or two examples with the initial situation and the probable solution. Using PRE tags (hence a non-proportional font) should work fine.
AAA...BB
AAA...BB
......BB
CC......
CC..DDDD
|
|
|
|
|
It appears to be a two-dimensional problem, where each dimension can be treated independently, so it becomes two one-dimensional problems.
|
|
|
|
|
I'm trying to implement functionality very similar to what PowerPoint does.
Example:
- Open PowerPoint
- On a blank slide, throw a bunch of boxes on the slide.
- Select several boxes.
- (In PowerPoint 2003; probably different in newer versions) Click Draw on the toolbar at the bottom and select "Align or Distribute" -> "Distribute Horizontally"
So, if you start with this (collapsing everything to one dimension for clarity):
....[A].....[B].....[C]..[D]...
You end up with this:
....[A]....[B]....[C]....[D]...
This is fairly trivial if all the boxes are the same size, but when they are different sizes, it gets a bit trickier (adding a second dimension only to make overlaps clearer - it's still a 1D problem):
.......[...B...]............[D]...
....[.A.]............[..C..]......
Should give:
..........[...B...].........[D]...
....[.A.]...........[..C..].......
|
|
|
|
|
If it is really one-dimensional, then it is trivial: the spaces should equal
( available_width - sum_of_widths_of_objects_in_row ) / ( number_of_objects_in_row - 1 )
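As a quick sanity check of that formula (illustrative numbers, Python):

```python
# Four objects of width 3 in a row 27 units wide (made-up numbers):
available_width = 27
widths = [3, 3, 3, 3]
spacing = (available_width - sum(widths)) / (len(widths) - 1)
assert spacing == 5.0  # 15 free units split over 3 inner gaps
```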
It becomes interesting when it is actually a 2D problem, i.e. some blocks have multiple neighbours on one side, not all of the same width, as in:
AAAA......DD.....GGG
AAAA.............GGG
........CCCCCCC..GGG
BBBBBB..CCCCCCC..GGG
........CCCCCCC.....
EE..F...CCCCCCC...HH
....F.............HH
KKK.F....LLLL.....HH
|
|
|
|
|
Luc Pattyn wrote: if it is really one-dimensional, then it is trivial: the spaces should equal
( available_width - sum_of_widths_of_objects_in_row ) / ( number_of_objects_in_row - 1 )
Yes, that's what I described several posts above. That part isn't that complicated. What I want is the most efficient way to actually implement it, since (as described above) you have to find the left-most and right-most boxes (which could be the same box) first, then sort the remaining boxes by their current position, then iterate through the set again actually setting the positions.
I thought there might be some clever shortcut to do it that I wasn't aware of.
|
|
|
|
|
I would just sort their horizontal centers.
|
|
|
|
|
Greetings,
I'm making a little utility app to dump out test data for my forays into neural network programming (to make the training data sets). I'm trying to make it a little easier to create test data sets, so I'm building this application that allows me to create a list of input fields (along with the range of their values). From that, I want the app to spit out a file that contains all the possible combinations of the available values (I'm assuming that the sets of values are small, and not interrelated).
I've got an abstract base class called BaseDataBuilder that simply contains a description and an abstract function that returns an IEnumerable<object> representing the set of available values for a particular input field. I'm currently only inheriting from this class in another class called BooleanDataBuilder, which has the following function that defines the available values:
public override IEnumerable<object> AvailableValues()
{
    yield return true;
    yield return false;
}
In the future, I'd like to have the flexibility to use this with other data types, but for the moment, I'm only considering booleans.
Further up, I have an object that contains a list of BaseDataBuilder objects (currently called DataBuilder, but I'll be changing the type name soon as it isn't descriptive). This object has the following function definition.
public IEnumerable<object[]> GetDataValues()
{
}
What I want this to return is essentially a set of rows that is a combination of all the available values as defined by the list of BaseDataBuilder objects. However, I'm kind of stuck as to how to implement this. I know I could do it using recursion, but I was hoping there was some sort of LINQ-ish type of way to do this.
Anybody have any ideas?
|
|
|
|
|
For N input fields, define an N-bit binary number. As the number is incremented from all zeroes to all ones, the bits of each number pick out one combination, so the full count enumerates all possible combinations of the N input fields. (The collection of all such subsets is also called the "power set".)
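That counting idea can be sketched like this (Python for brevity; the same loop translates directly to C# with an int counter and bit tests):

```python
def all_boolean_rows(n):
    """Yield every combination of n boolean fields by counting an
    n-bit number from 0 to 2**n - 1; bit i gives field i's value."""
    for number in range(2 ** n):
        yield [bool((number >> bit) & 1) for bit in range(n)]
```

For non-boolean fields the same shape generalizes to a Cartesian product of the per-field value sets (itertools.product in Python; chained SelectMany calls give the same effect in LINQ).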
|
|
|
|
|
I have a list of values in single dimensional vector/array as follow:
{([point0, value0], [point1, value1], ... , [pointx, valuex]), ([pointx+1, valuex+1], [pointx+2, valuex+2], ... , [pointy, valuey]), ([pointy+1, valuey+1], [pointy+2, valuey+2], ... , [pointz, valuez])}
[At first it may look weird that this is a single-dimensional array, but yes, it is:]
{point0,value0,point1,value1,...,pointx,valuex}
Here I know how the values are structured in the input array; I just need to implement the best sorting technique for it. The requirement is to sort each block by point (i.e. sort point0,value0 through pointx,valuex). I know the number of elements in each block, which is the same for all blocks. I can simply write something like:
for (int blockIndex = 0; blockIndex < totalBlocks; ++blockIndex)
{
    for (int i = blockSize * blockIndex; i < blockSize * (blockIndex + 1); i += 2)
    {
        for (int j = blockSize * blockIndex; j < blockSize * (blockIndex + 1); j += 2)
        {
            if (setOfValues[i] < setOfValues[j])
            {
                // swap the points
                int temp = setOfValues[i];
                setOfValues[i] = setOfValues[j];
                setOfValues[j] = temp;

                // swap the corresponding values
                temp = setOfValues[i + 1];
                setOfValues[i + 1] = setOfValues[j + 1];
                setOfValues[j + 1] = temp;
            }
        }
    }
}
The time required by this algorithm is huge: O(totalBlocks * blockSize^2).
I am thinking of writing this in better way. Any help would be great!
Thanks,
AksharRoop
|
|
|
|
|
That is the worst sorting algorithm I've ever seen: you have chosen a poor data representation, picked the least sophisticated algorithm, and created a poor implementation.
For a general overview on sorting algorithms, read either this[^] or Knuth's book on the subject.
If your data were represented in a normal way (say, an array of structs, each struct holding two ints), you could use the built-in Sort method which exists for arrays and all kinds of collections. Specifying the sorting criterion is explained here[^].
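If the interleaved layout really can't change, one compromise along those lines is to regroup each block into (point, value) pairs just for the sort, then flatten back (a Python sketch of the idea; the library sort replaces the O(blockSize²) exchange loop with O(blockSize log blockSize)):

```python
def sort_blocks(values, block_size):
    """Sort each block of an interleaved [point, value, point, value, ...]
    array by point, in place, using the library sort per block."""
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        # Pair up (point, value), sort pairs by point, flatten back.
        pairs = sorted(zip(block[::2], block[1::2]))
        values[start:start + block_size] = [x for p in pairs for x in p]
```

The data leaves the function in exactly the wire format it arrived in, so downstream processing is unaffected.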
|
|
|
|
|
I know, Luc. But I don't have another choice. I can't change the representation of the data because it is sent on for further processing.
|
|
|
|
|
How big is the data set? You could change it into something more suitable before using a better sorting algorithm then change it back once sorted.
I am not sure I get what your data is supposed to look like so it's difficult to suggest a sorting algorithm to work on it in its raw state.
|
|
|
|
|
I don't think the data representation is the main problem here (unless they're large data structures); it's the choice of algorithm.
As already mentioned, it's probably the least efficient one possible...
Days spent at sea are not deducted from one's alloted span - Phoenician proverb
|
|
|
|
|
I know - I just think it would be easier for him to put the data into something more usable, then sort that with an appropriate out-of-the-box algorithm, rather than fiddling with an algorithm to suit his data structure.
|
|
|
|
|
Any known methods here for counting the number of 'ON' bits in a bitmap?
Tadeusz Westawic
Sum quid sum.
|
|
|
|
|
What kind of "bitmap" do you mean?
The fastest way to count bits in an integer (excluding a straight table lookup, which only works well if the table is in cache, and the popcnt instruction, which is not yet commonly supported) is this: http://stackoverflow.com/posts/1511920/revisions[^]
It's based on this: http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel[^]
edit: ok to clear the mess I made here up a bit..
There are many ways, including:
- popcnt: not supported by enough CPU's yet
- table lookup: only works well if you can keep the table in cache until you're done counting bits; a cache miss is many times more costly than even the "standard" way (so if you have to count lots of bits in a tight loop, go for it)
- count the bits one by one[^], works well if you expect few bits to be set (or reset - just take the complement and subtract the count from the length)
- count the bits in parallel (see links in the first part of my post) - it has no bad case, making it a safe choice. It simply uses a fixed number of steps, without needing big tables.
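For reference, the parallel method from those links looks like this (transcribed into Python for illustration; the C version uses the same masks and shifts, and the input is assumed to already be a non-negative 32-bit value):

```python
def popcount32(v):
    """Count the set bits of a 32-bit word with the classic parallel
    (SWAR) method: a fixed number of mask-and-add steps, no tables."""
    v = v - ((v >> 1) & 0x55555555)                 # 2-bit sums
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333)  # 4-bit sums
    v = (v + (v >> 4)) & 0x0F0F0F0F                 # 8-bit sums
    return ((v * 0x01010101) >> 24) & 0xFF          # total in top byte
```

To count the ON bits of a whole mono bitmap, apply this word by word and sum the results.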
modified on Thursday, June 24, 2010 11:22 AM
|
|
|
|
|
Good links, thanks.
All bitmaps are mono.
I was thinking of taking the bits on a boolean swim to the upper left of the bitmap using bitblt() and then binary-searching for the first zero row, etc. I don't have a swim algorithm though.
Does that get anyone's wheels turning?
*********START EDIT
Um, assume theoretical mono bitmaps so we avoid platform dependency and speed discussion. I can always xlate to MS at code time.
*********END EDIT
Tadeusz Westawic
Sum quid sum.
-- Modified Thursday, June 24, 2010 12:28 PM
|
|
|
|
|
Sorry, I have no idea what a swim algorithm is, google isn't being very helpful either..
|
|
|
|
|