|
Hi there,
I am interested in reading about the prediction of schedules/calendars. However, I am pretty sure I am looking for the wrong terminology (often finding papers to do with processor instruction scheduling etc.).
An example of something I want to achieve: a user logs each time they exercise during the day and for how long; over time this builds up a history of their exercise routine. Based on that past routine, I would like to predict what the next day's exercise may look like, taking into account patterns that emerge (for example, swimming only on Thursday afternoons for 1 hour).
What should I be looking for? What would this area be called? Any good examples of papers in this area? Thanks for your help!
Thanks,
Ronan
|
|
|
|
|
|
Neural networks.
You define a model: any perceived relationships between inputs and outputs; you "train" the model using historical data; then have the model "predict" an outcome based on new inputs. e.g.
Test Run - Dive into Neural Networks[^]
|
|
|
|
|
Hello,
For my new project, I have to develop an algorithm called frequency sweep.
The function parameters are nFreqStart, nFreqEnd and dSweepTime.
As an example: for nFreqStart = 800 Hz the period T of the signal is 1.25 ms; for nFreqEnd = 1600 Hz, T is 0.625 ms; and the sum of the periods of all frequencies from nFreqStart to nFreqEnd must not exceed dSweepTime.
My first version of this function is:
uint16_t nFrequencies;

void sweep_initialize(const uint16_t nFreqStart, const uint16_t nFreqEnd, const double dSweepTime)
{
    double dTotalTime = 0.0;
    double dIncrement = 0.0;
    uint16_t nStep = 0;
    double dTmpFreq = 0.0;
    bool bSweep = true;
    double dInitialTime = (1.0 / nFreqStart) + (1.0 / nFreqEnd);

    nFrequencies = 2;
    while (bSweep == true)
    {
        dTotalTime = dInitialTime;
        dIncrement = (double)(nFreqEnd - nFreqStart);
        dIncrement /= (double)(nFrequencies);
        dTmpFreq = (double)nFreqStart;
        nStep = nFrequencies - 1;
        for (uint16_t i = 0; i < nStep; i++)
        {
            dTmpFreq += dIncrement;
            dTotalTime += 1.0 / dTmpFreq;
        }
        nFrequencies += 1;
        if (dTotalTime >= dSweepTime)
        {
            bSweep = false;
            break;
        }
    }
}
The next thing is, I need a counter (cpu clocks) for each frequency:
uint16_t* pSweep_Array;

void sweep_getCounter()
{
    if (pSweep_Array != NULL)
    {
        free(pSweep_Array);
        pSweep_Array = NULL;
    }
    pSweep_Array = (uint16_t*)malloc(sizeof(uint16_t) * nFrequencies);
    if (pSweep_Array != NULL)
        memset(pSweep_Array, 0, sizeof(uint16_t) * nFrequencies);

    double dFrequency = (double)nFreqStart;
    double dIncrement = (double)(nFreqEnd - nFreqStart);
    dIncrement /= (double)(nFrequencies - 1);
    double dCounter = 0.0;
    for (uint16_t i = 0; i < nFrequencies; i++)
    {
        dCounter = 65536 - (F_CPU / dFrequency);
        pSweep_Array[i] = (uint16_t)round(dCounter);
        dFrequency += dIncrement;
    }
}
But as a result, the computational complexity is O((n-1)! * n²), where n is the number of frequencies between nFreqStart and nFreqEnd, including nFreqStart and nFreqEnd.
I need help for a less complex solution (function).
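For reference, a lower-complexity approach is possible: the total time for a given candidate count can be computed in a single O(n) pass, and it only grows as the count grows, so a binary search over the count avoids growing n one step at a time and re-summing. A hedged sketch (helper names are mine, and I used the (nFrequencies - 1) spacing from sweep_getCounter, which differs slightly from the division by nFrequencies in sweep_initialize):

```c
#include <stdint.h>

/* Hypothetical helper: total sweep time for n evenly spaced frequencies
   from fStart to fEnd, computed in one O(n) pass. For n = 2 this gives
   1/800 + 1/1600 = 0.001875 s, matching dInitialTime above. */
static double sweep_time(uint16_t fStart, uint16_t fEnd, uint32_t n)
{
    double dStep = (double)(fEnd - fStart) / (double)(n - 1);
    double dFreq = (double)fStart;
    double dTotal = 0.0;
    for (uint32_t i = 0; i < n; i++) {
        dTotal += 1.0 / dFreq;    /* add one period per frequency */
        dFreq += dStep;
    }
    return dTotal;
}

/* Binary search for the smallest n whose sweep time reaches dSweepTime.
   Each probe is O(n), so the whole search is O(n log n) rather than the
   O(n^2) of testing every candidate count in turn. */
uint32_t sweep_count(uint16_t fStart, uint16_t fEnd, double dSweepTime)
{
    uint32_t lo = 2, hi = 65535;
    while (lo < hi) {
        uint32_t mid = lo + (hi - lo) / 2;
        if (sweep_time(fStart, fEnd, mid) >= dSweepTime)
            hi = mid;          /* long enough: try fewer frequencies */
        else
            lo = mid + 1;      /* too short: need more frequencies   */
    }
    return lo;
}
```

The binary search relies on the total time being monotonic in the number of frequencies, which holds here because every added frequency adds another period.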
|
|
|
|
|
What I don't understand is why you compute your first counter the way you do.
My first question is: why are you trying to calculate the frequencies this way, and how? You're simply incrementing an integer on every pass of the loop.
In your first loop, in sweep_initialize, you set nStep to one less than nFrequencies. All I'm really seeing is that you are summing the reciprocals of the frequencies between 800 and 1600.
If all you need to compute is "the computation complexity... O((n-1)! * n²)", then you should compute exactly that. But you're not doing that.
You're computing something like:
0.001875 + Σ 1/f, for f stepping from nFreqStart to nFreqEnd
Instead, just set n1 to nFreqEnd - nFreqStart and n2 to nFreqStop - nFreqStart.
Then plug them in.
I don't understand what O is, but plug that in too.
It would be helpful to know what exactly you are returning...
|
|
|
|
|
Hello, I have to write a term paper on the subject of "root sorting". Google doesn't give me anything useful, so I am here to ask someone who understands data structures and algorithms to give me some pointers on the subject. As far as I can tell, it has something to do with sorting a binary tree using the root of the tree. Can you please give me some insight into which algorithms I should look for, or which book I should read?
|
|
|
|
|
Is that really what they said? It doesn't look like it's a thing.
|
|
|
|
|
Radix is Latin for root -> Radix Sorting!
Cheers!
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
|
|
|
|
Excellent, now it makes sense
|
|
|
|
|
It is really called Radix Sorting[^]. Radix is Latin and means root. So it seems to me someone bungled this up and it got "lost in translation".
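Radix sort itself is short enough to show. A minimal LSD (least-significant-digit-first) sketch on 32-bit keys, one byte per pass; this is an illustration of the idea, not a tuned implementation:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal LSD radix sort on unsigned 32-bit keys, one byte ("digit")
   per pass. Each pass is a stable counting sort on that byte. */
void radix_sort(uint32_t *a, size_t n)
{
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (tmp == NULL)
        return;
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        /* histogram of this byte, offset by one for the prefix sum */
        for (size_t i = 0; i < n; i++)
            count[((a[i] >> shift) & 0xFF) + 1]++;
        /* prefix sums: count[d] becomes the first output slot for digit d */
        for (int d = 0; d < 256; d++)
            count[d + 1] += count[d];
        /* stable scatter into tmp, then copy back for the next pass */
        for (size_t i = 0; i < n; i++)
            tmp[count[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}
```

The stability of each pass is what makes the earlier (lower) digits stay correctly ordered while later (higher) digits are sorted.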
Cheers!
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
|
|
|
|
You can read Cormen ("Introduction to Algorithms") to understand the basics of algorithms.
|
|
|
|
|
Why are you replying to me?
Are you confused?
"I had the right to remain silent, but I didn't have the ability!"
Ron White, Comedian
|
|
|
|
|
I am doing an introductory course on algorithms. I've come across this problem which I'm unsure about. I would like to know which of the 2 is dominant:
f(n): 10 log n or g(n): log(n^2)
given the definitions of Ω, Θ, and O.
I assumed f(n), so f(n) = Ω(g(n)).
The reason being that log(n^2) is equivalent to 2 log n, so 10 log n is dominant.
Could somebody verify this or offer an alternative?
|
|
|
|
|
Why don't you draw the graph and see which one is closer to the bottom? These notations are just a grading system for algorithms (used to classify them by their complexity), to find out which is better compared to another. I personally use only Big O notation (rather than the lower bound and the tight bound as well; just consider Big O and draw the graph). Go here and do that: https://www.desmos.com/calculator[^]. The faster a function grows with its input, the more time and space it consumes (time and space may depend on the data source or the algorithm being used). Note, though, that 10 log n and log(n²) = 2 log n differ only by a constant factor of 5, so asymptotically they grow at the same rate: each is Θ of the other.
Secondly, Quote: which of the 2 are dominant. What do you mean by dominant? Neither truly dominates here; up to constant factors the two functions are the same, so f = O(g) and f = Ω(g) both hold.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
|
|
|
|
Thanks for the link.
I assumed that a "dominant" function is one satisfying f = Ω(g), where f is 10 log n.
So we are in agreement, since conversely g = O(f), where g is log(n^2). I was simply attempting to use a mathematical approach to prove or disprove it.
|
|
|
|
|
Hello all,
I found this task:
"Given an undirected graph G = (V,E), and an integer k, the cycle-elimination problem is to decide if all the cycles of the graph can be eliminated by deleting at most k edges from the graph. Either show that the problem is NP-complete, or describe a polynomial-time algorithm for it."
Do you have any ideas for the solution? Should I reduce the task to the feedback arc set problem?
Thank you very much
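One idea worth checking (a sketch, not a verified answer): in an undirected graph every cycle-free subgraph is a forest, so the minimum number of edges to delete is |E| - |V| + c, where c is the number of connected components; that would make the decision problem polynomial. Note also that the feedback arc set problem concerns directed graphs. A union-find sketch in C (all names are mine):

```c
#include <stdlib.h>

/* Union-find with path halving. */
static int find(int *p, int x)
{
    while (p[x] != x) {
        p[x] = p[p[x]];
        x = p[x];
    }
    return x;
}

/* Minimum number of edge deletions that leave the graph acyclic:
   every edge joining two vertices already in the same component
   closes a cycle and must go. The answer to the decision problem
   is "yes" iff this count is <= k. */
int min_cycle_deletions(int v, int e, int (*edges)[2])
{
    int *p = malloc(v * sizeof *p);
    int deletions = 0;
    if (p == NULL)
        return -1;
    for (int i = 0; i < v; i++)
        p[i] = i;               /* each vertex starts in its own set */
    for (int i = 0; i < e; i++) {
        int a = find(p, edges[i][0]);
        int b = find(p, edges[i][1]);
        if (a == b)
            deletions++;        /* this edge closes a cycle */
        else
            p[a] = b;           /* merge the two components */
    }
    free(p);
    return deletions;
}
```

On a triangle (3 vertices, 3 edges) this counts 1 deletion, which agrees with |E| - |V| + c = 3 - 3 + 1.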
|
|
|
|
|
|
If I want to compute A*B, then:
In the Russian peasant's algorithm:
Declare two helper variables, C and a product accumulator P, initialized to 0, i.e. all bits are initialized to 0.
NOW:
1. Copy A to C.
2. Select one set bit of B (a bit whose value is 1) and left-shift C by the number of bits to the right of the selected bit; i.e. if B = 0000100, then C should be left-shifted 2 bits, because there are 2 bits to the right of the single set bit of B. The bits shifted into C are set to 0.
3. Add C to P.
4. Clear the selected bit; if B becomes 0, i.e. all bits are 0, then P is the product of A and B, i.e. P = A*B; otherwise repeat all steps, i.e. redo steps 1 to 4.
Is that correct? If yes, then why is the CPU/ALU designed to use Booth's algorithm instead of the Russian peasant's algorithm? In some cases the Russian peasant's algorithm can be much easier to perform than Booth's algorithm, namely when most bits of the multiplier B are 0 and only a few are 1.
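The steps above, taking the set bits of B from low to high, can be sketched as:

```c
#include <stdint.h>

/* Shift-and-add multiplication as described above: each set bit of B
   contributes a correspondingly shifted copy of A to the product P. */
uint32_t peasant_mul(uint16_t a, uint16_t b)
{
    uint32_t p = 0;          /* product accumulator, all bits 0 */
    uint32_t c = a;          /* step 1: copy A to C             */
    while (b != 0) {         /* step 4: repeat until B is all 0 */
        if (b & 1)           /* step 2: the lowest set bit of B */
            p += c;          /* step 3: add the shifted C to P  */
        c <<= 1;             /* shift C for the next bit of B   */
        b >>= 1;             /* drop the bit just handled       */
    }
    return p;
}
```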
|
|
|
|
|
That is correct (though usually you wouldn't select a random 1 but a particular 1, eg going from low to high or from high to low, you can use some additional tricks that way). Typical hardware implementations don't use this because it takes a lot of steps. They typically don't really use Booth's algorithm either, but some of the ideas are reused.
Yes it may take very few steps, but it can also take many steps (consider -1 * -1). It's bad enough already that it can take many steps, but the variation is really bad as well - throws a wrench into the usual pipelining and forwarding tricks. I suppose you might pipeline it by unrolling that entire loop, not very pleasant in terms of area, and then it has multiple exits which on top of being generally annoying means you can't do the usual thing of dispatching an instruction the cycle before its inputs will be ready and then rely on forwarding because you don't know when the multiplication ends until it does (ok you kind of could but it sucks).. it's horrible.
So other algorithms were invented, such as the Dadda multiplier and the Wallace tree. The idea (for both of them) is to first compute all pairwise single-bit products, then add them up using as many simple adders as possible (not full word adders, but the 3:2 compressor kind). That works out really well, because the first step (the single bit products) are all completely independent, and then you get a ton of product bits that you can reduce to fewer bits with independent full adders. It takes some number of layers of such adders, and the layers are dependent, but within a layer everything is parallel.
And then you still need a multi-bit adder at the end, where you'd use any fast adder design you like. You only need one of them so go crazy, unlike an unrolled Russian peasant multiplication (RPM) where you'd need a metric ton of big adders (you could use ripple carry but then it sucks). So it's smaller and faster than unrolled RPM, and pipelinable unlike iterative RPM.
That's not the end though, for example you can use slightly larger modules that add up 5 bits into 3 in a particular way and build the same sort of layered tree out of those, which is a bit faster. And you can shuffle the connections of those modules around a bit so that you connect slow bits (the modules don't calculate their outputs all at the same time) to "fast inputs" (the delay through a module is not the same for every input) and vice versa to compensate the timings.
Now Booth's encoding, specifically the modified Booth encoding of radix 4 (if you go higher the circuits become annoying, not worth it), can be used to improve that even more, because you get fewer partial products.
Anyway, on a typical modern processor, multiplication takes maybe 3 to 5 times as much time as an addition. You can't match that with RPM unless there are very few bits set in the multiplier; on average it would lose by a huge margin.
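The 3:2 compressor mentioned above is just a full adder applied to every bit position in parallel: three addends become two whose sum is unchanged, with no carry rippling between positions. A small sketch:

```c
#include <stdint.h>

/* Carry-save addition (3:2 compression): compress three addends into
   a per-bit sum and a shifted per-bit carry, with no carry propagation
   between bit positions. The invariant is x + y + z == *sum + *carry
   (modulo word size). */
void csa(uint32_t x, uint32_t y, uint32_t z, uint32_t *sum, uint32_t *carry)
{
    *sum = x ^ y ^ z;                             /* per-bit XOR sum       */
    *carry = ((x & y) | (x & z) | (y & z)) << 1;  /* per-bit majority vote */
}
```

Because every bit is computed independently, layers of these compressors can reduce many partial products in parallel, which is the core of the Wallace and Dadda designs described above.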
|
|
|
|
|
Thanks for the quick and detailed answer. Now I am satisfied.
|
|
|
|
|
I am creating a DOORS database script that will be used to get all differences between 2 versions of a particular DOORS table (new vs old). I want to get all differences between the 2 tables (additions, deletions and changes). For anyone who doesn't know about DOORS, when an object is removed from a DB, the program marks the item's attribute as deleted, but you can still see it when you loop through all items. What I want to do is create a function that will display any items that have been changed since the older version.
Here is the algorithm I'm using:
for each newItem in newVers {
    if (newItem.deleted) {
        // check whether it was already deleted in oldVers
        if (!oldItem.deleted) {
            // newly deleted since the old version
        }
    } else {
        if (newItem.id exists in oldVers) {
            if (newItem != oldItem) {
                // changed since the old version
            }
        } else {
            // added since the old version
        }
    }
}
Does this look like the way I should do this, or is there a better way? Just to reiterate, I am only looking for what, in the newer version has changed since the older one. I am not worried about changes in the opposite direction.
Any ideas would be appreciated.
Chris
modified 26-Sep-16 9:02am.
|
|
|
|
|
Your approach is accurate, but could be slow. I'm guessing that most of the items are unchanged, so
if (newItem.id exists in oldVers) could be an expensive operation, particularly if id is not indexed.
If your database supports fast sequential access on that field, then there are algorithms that "walk" the old and new, much the way text editors do file differencing.
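The "walk both versions" idea, assuming both lists can be produced sorted by id, looks roughly like this in C (types and names are illustrative; a real DXL script would act on DOORS objects instead):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative item record standing in for a DOORS object. */
typedef struct {
    int id;
    int deleted;
    const char *text;
} Item;

/* Walk two id-sorted item arrays in lockstep: O(n + m) total instead
   of one lookup into the old version per new item. Returns how many
   items were added, removed, or changed. */
int diff_count(const Item *olditems, size_t n, const Item *newitems, size_t m)
{
    size_t i = 0, j = 0;
    int diffs = 0;
    while (i < n && j < m) {
        if (olditems[i].id < newitems[j].id) {
            diffs++;                      /* only in old: removed */
            i++;
        } else if (olditems[i].id > newitems[j].id) {
            diffs++;                      /* only in new: added   */
            j++;
        } else {
            if (olditems[i].deleted != newitems[j].deleted ||
                strcmp(olditems[i].text, newitems[j].text) != 0)
                diffs++;                  /* same id, but changed */
            i++;
            j++;
        }
    }
    /* anything left over exists in only one version */
    return diffs + (int)(n - i) + (int)(m - j);
}
```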
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
I have yet another issue with this function that I didn't think of originally... There are a bunch of attributes that the user can choose from to check for changes and I want to list those attributes, if they've changed, in a new column. However, since the text and heading are attributes that we always check first, I want to skip over those attributes since they will always be in the list. What is the best way to ignore those 2 attributes in the list? My first thought was to include an if-statement within the loop to look for those items. Is this the best option, or is there another way that I'm not thinking of?
Chris
|
|
|
|
|
This is part of my last post here, but I've made the question much smaller.
I'm trying to write a function that will compress items measured in inches.
So say I have 10 of these:
Length = 72
Width = 4
Height = 3
And I package 6 of them into this, leaving the other 4 for another package:
Length = 72
Width = 24
Height = 3
I want to compress them further by decreasing the width and increasing the height, like this: I took 2 of the stacks and put one on top of the other, increasing the height.
Length = 72
Width = 12
Height = 6
I have this, which works in 1 pass, but I'm trying to figure out a way to make multiple passes until there are no more passes to make. I can't seem to figure out a method to do this.
If (item.Width > item.Height) Then
    If (item.Qty > 2) Then
        item.Width -= p.Width * 3
        item.Height += p.Height
    End If
End If
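A loop version of the restacking idea, sketched in C for brevity, under the assumption that the stack can only be split into two equal halves; the stopping rule (stop when the halved width would be narrower than the doubled height) is my own guess at the intent:

```c
/* Keep splitting the package's width in half and doubling its height
   (72x24x3 -> 72x12x6 in the example above), as long as the width
   splits evenly and the result stays at least as wide as it is tall. */
void compress_dims(int *width, int *height)
{
    while (*width % 2 == 0 && *width / 2 >= *height * 2) {
        *width /= 2;     /* split the stack into two halves */
        *height *= 2;    /* and stack one half on the other */
    }
}
```

For the 24 x 3 example this makes exactly one pass (to 12 x 6) and then stops on its own, so no pass counter is needed.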
|
|
|
|
|
Debug the program.
I believe the line below is what limits it to a single pass:
If (item.Width > item.Height) Then
|
|
|
|
|