|
"test4" is doing a value-converting cast, while the algorithm is meant to be used with a reinterpret_cast: take the bit-pattern of the float, and use it as though it was an integer. Depending on the available commands, points may not be necessary to implement that.
Perhaps if you tell me how this FPGA will work I can work out some way to implement that algorithm on it.
I'm not sure that the Goldschmidt division is right - well, actually the implementation is probably right and I've seen it before, but what it's doing (calculating the fixed-point inverse of a fixed-point input, I think?) doesn't match how it's used there. That would also explain its unusually high error.
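For reference, the bit-pattern reinterpretation mentioned above can be done portably like this (a sketch; `memcpy` avoids the undefined behaviour of pointer-punning, and the function names are my own):

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret the raw IEEE-754 bits of a float as a 32-bit integer.
// Note the contrast with a value-converting cast: (int)1.5f gives 1,
// while float_bits(1.5f) gives the bit pattern 0x3FC00000.
uint32_t float_bits(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

// The inverse: treat a 32-bit integer as the bit pattern of a float.
float bits_to_float(uint32_t u) {
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}
```

For example, `float_bits(1.0f)` yields `0x3F800000`, the IEEE-754 encoding of 1.0f.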
|
|
|
|
|
Actually, I don't know any more about its implementation; my project manager tells me what to do, and I have to implement the desired algorithm. Also, I haven't studied anything about FPGAs or hardware implementations.
What I do know is that he wants to speed up his program by rewriting it in C - for example, a 6x6 matrix inversion program in which there are some 1/d operations, where d is a float (we can convert it to an integer by multiplying it by 2^n). He has written his code in LabVIEW and wants to convert it to C; then it will be implemented on an FPGA.
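In case it helps, here is a minimal sketch of the Goldschmidt-style reciprocal discussed above (shown in floating point for clarity; a real FPGA version would use fixed-point arithmetic, and the four-iteration count is my assumption):

```cpp
// Goldschmidt iteration for 1/d. Each step multiplies both the
// numerator and d by the correction factor f = 2 - d, driving d
// toward 1 and the numerator toward 1/d. Converges quadratically
// for d in (0, 2); in practice d is pre-scaled into (0.5, 1].
float goldschmidt_recip(float d, int iterations = 4) {
    float n = 1.0f;              // numerator: converges to 1/d
    for (int i = 0; i < iterations; ++i) {
        float f = 2.0f - d;      // correction factor
        n *= f;                  // scale the numerator
        d *= f;                  // d converges to 1
    }
    return n;
}
```

With four iterations and d pre-scaled into (0.5, 1], the result is accurate to well within single-precision rounding for this kind of use.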
|
|
|
|
|
Ok, now I don't understand. So you're making code that will be implemented in the FPGA fabric itself, not as a program run on a processor that the FPGA is configured as? Then why not use the special tricks that work well in hardware?
|
|
|
|
|
I was using code similar to the following to calculate the sum from offset (where offset >= 0) to length:
int offset = 2;
int length = 364;
float sum = 0.0f;
for (int i = offset; i <= length; i++)
{
    sum += (float) i;
}
However this seemed a naïve approach, so I worked out (from the formula for the sum from 0 to n) that I could use the following:
where
f=offset
n=length
sum = ((1+n) * (n/2.0)) - ((1+f) * (f/2.0) - f)
which seems to give the correct answer.
Out of curiosity what algorithm would normally be used for this kind of thing?
|
|
|
|
|
|
Your link to Wolfram Alpha is pointless.
Where is the answer you found?
|
|
|
|
|
Wolfram confirmed that the algorithm in my above question is the one to use.
To work out the formula, start with the formula for the sum of i from i=0 to n, which is:
(n/2.0) * (n+1)
Then we need to subtract the offset using the same algorithm:
((n/2.0) * (n+1)) - ((f/2.0) * (f+1))
And finally account for when the offset is greater than zero by subtracting the offset:
((n/2.0) * (n+1)) - (((f/2.0) * (f+1))-f)
So the final algorithm is:
((n/2.0) * (n+1)) - (((f/2.0) * (f+1))-f)
Incidentally, the same approach applies for i^2, i^3, i^4, etc.; only the formula changes.
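The closed form above is easy to sanity-check against the naive loop (a sketch; the function name is my own):

```cpp
// Closed-form sum of the integers from f (offset) to n (length),
// inclusive, using the formula derived above:
// sum = n(n+1)/2 - (f(f+1)/2 - f)
double range_sum(int f, int n) {
    return (n / 2.0) * (n + 1) - ((f / 2.0) * (f + 1) - f);
}
```

For the values in the original question, `range_sum(2, 364)` matches the loop's result of 66429 exactly.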
|
|
|
|
|
But does this algorithm have a name?
|
|
|
|
|
Yes, Partial sums or Finite series.
|
|
|
|
|
C'mon, that's just the formula Gauss found for calculating the sum from 1 to n. OK, in your case you have to subtract the sum from 1 to offset from that sum (and make sure that you include/exclude the offset correctly). But that's a simple thing.
|
|
|
|
|
Oh. I was thinking it looked more like one of Fermat's theorems.
|
|
|
|
|
All the way at the bottom of wiki:interpolation_search[^] there's this text
Each iteration of the above code requires between five and six comparisons (the extra is due to the repetitions needed to distinguish the three states of < > and = via binary comparisons in the absence of a three-way comparison) plus some messy arithmetic, while the binary search algorithm can be written with one comparison per iteration and uses only trivial integer arithmetic. It would thereby search an array of a million elements with no more than twenty comparisons (involving accesses to slow memory where the array elements are stored); to beat that the interpolation search as written above would be allowed no more than three iterations. (emphasis mine)
Is that even true? If so, why?
Because that makes exactly no sense to me. It appears to imply that 1) accessing the same array element twice means reloading it, or 2) reloading the same element takes as much time as loading it the first time.
Neither of which bears any resemblance to reality.
So, what's going on here? Is Wikipedia just wrong? Am I missing something?
|
|
|
|
|
It seems like the interpolation search does more work per iteration, so you could do more binary search iterations in the same time.
The extra work is done in the calculation of mid:
mid = low +
((toFind - sortedArray[low]) * (high - low)) /
(sortedArray[high] - sortedArray[low]);
while the binary search is leaner: http://en.wikipedia.org/wiki/Binary_search[^]
It looks like the calculation of mid requires on the order of ten extra instructions per iteration, and one of these is a division, which is relatively slow. Using the same array element a second time won't take twice the time, because the value will be in the cache after the first access.
One factor that may slow down the binary search is the recursive call at each iteration, although it could be written iteratively.
So the Wikipedia claim could be correct.
But the number of iterations needed for the interpolation search is dependent on how uniform the distribution of numbers is. For a near-uniform distribution, you'd only need one iteration of interpolation search.
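For reference, here is the lean one-comparison-per-iteration binary search variant the Wikipedia quote alludes to (a sketch, not the exact Wikipedia code): the range is narrowed with a single comparison per iteration, and equality is checked only once at the end.

```cpp
// Binary search with one comparison per loop iteration.
// Searches the sorted array a[0..n) for toFind; returns its
// index, or -1 if it's not present.
int binary_search_1cmp(const int* a, int n, int toFind) {
    int low = 0, high = n;              // invariant: answer is in [low, high)
    while (high - low > 1) {
        int mid = low + (high - low) / 2;
        if (a[mid] <= toFind)           // the single comparison
            low = mid;
        else
            high = mid;
    }
    return (n > 0 && a[low] == toFind) ? low : -1;
}
```

An array of a million elements narrows to one candidate in 20 iterations of this loop, which is where the "no more than twenty comparisons" figure comes from.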
|
|
|
|
|
Alan Balkany wrote: "So the Wikipedia claim could be correct." Maybe. It's certainly slower - but a factor of 7 still seems highly unlikely to me, especially considering the essentially random nature of the memory accesses and of the branches.
|
|
|
|
|
Hi guys, I have a list of numbers and I wrote an algorithm to find the smallest number in that list:
#include <iostream>
using namespace std;

int SmalList(int a[], int size); // declaration so main can call it

int main()
{
    int a[] = { 2, 4, 8, 3, 1 };
    int size = 5;
    cout << "The smallest number in that list is: " << SmalList(a, size) << endl;
    return 0;
}

int SmalList(int a[], int size)
{
    int xsmall = a[0];
    for (int i = 0; i < size; i++) {
        if (a[i] < xsmall) {
            xsmall = a[i];
        }
    }
    return xsmall;
}
But I'm trying to find an algorithm that can find the n smallest numbers in that list.
For example, if the list is A[] = {2, 4, 8, 3, 1},
how can I find the three smallest numbers in that list?
The function should return a list with three numbers.
|
|
|
|
|
You could use the same loop as your existing one, but with an array to hold the numbers - or sort them into order first.
Use the best guess
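The sort-first suggestion can be sketched like this using the standard library (a sketch; the function name is my own, not from the original code):

```cpp
#include <algorithm>
#include <vector>

// Return the m smallest values of 'a', in ascending order.
// std::partial_sort moves the m smallest elements, sorted, to the
// front of the (by-value) copy; the rest is then trimmed off.
std::vector<int> smallest_m(std::vector<int> a, int m) {
    std::partial_sort(a.begin(), a.begin() + m, a.end());
    a.resize(m);
    return a;
}
```

For the example list, `smallest_m({2, 4, 8, 3, 1}, 3)` returns `{1, 2, 3}`.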
|
|
|
|
|
This is the question:
Write an algorithm that finds the m smallest numbers in a list of n numbers.
I did try to answer it with the algorithm below - can you fix it? I know I'm missing something in this code:
int getsmallest(int A[n], int m){
    int smalist =[m];
    small=A[0];
    for(i=1;i<n;i++){
        small=A[i]
    }
    return small;
}
|
|
|
|
|
I don't think that would even compile.
|
|
|
|
|
I know it's only a simple algorithm design; I don't need to write complete code,
only the steps of the algorithm.
|
|
|
|
|
OK, but try following it through with some sample values and see what happens.
|
|
|
|
|
To tell the truth, I just copied this from the board in my class. When I got back home and opened my notes, it made no sense to me. I know there is something wrong with this algorithm, so I posted the question.
Please, anyone who knows the answer, help me with it.
Thank you.
|
|
|
|
|
Since this is your class assignment you need to try and work it out for yourself. It's really not that difficult to write the steps to find the smallest value in a list, which is what you had in your original question.
|
|
|
|
|
Look, man, I'm not here to chat with you. If you actually read the thread: first, the question of finding the smallest value in the list is already answered - read my first reply. Second, this is not homework, because the professor gave that answer; I'm here because I believe that answer is wrong, and I want us to work together to solve the question.
Just leave the question alone and don't waste our time.
|
|
|
|
|
demo 2 wrote: just leave the question and don't waste our time. Who's wasting whose time?
|
|
|
|
|
Your stupid reply wasted my time reading it. You spend all your time here making stupid replies.
|
|
|
|