|
I'm trying to convert a finite state machine back to a regular expression.
I'm looking for some help with one of two algorithms that I just don't have the math background to translate into code. So far I've had no luck wrapping my head around it, and it's frustrating.
Theory of Computation | Generating regular expression from finite automata - GeeksforGeeks[^]
I don't necessarily need actual code; I can deal with pseudocode, or pretty much any language (except Java, for reasons having to do with the way their containers/collections work).
I'm trying to do this in C#, though, so some pointers would be helpful.
I know the basics of DFA and NFA machines enough to implement a regex engine. I simply cannot do this one thing.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
For a coding project I'm working on, I need to predict the number of workers necessary to finish a task within a limited time frame T. Let's rephrase it a bit...
Given a collection of objects X, each implementing a blocking function that takes an object Y as a parameter, pulled from a queue of said Y objects, and that does work for anywhere between TimeX1 < Time < TimeX2 ms;
Given the fact that the pool of Y objects is vastly larger than the X objects;
And that every time an object X finishes working on an object Y, it takes another object Y from said queue;
And given that the total working time of all X objects combined cannot exceed MaxTime;
What formula would predict the least number of X objects necessary to work through the whole queue of Y objects?
|
|
|
|
|
Sounds like a simple maths problem.
To guarantee that the queue will be processed in time, you have to assume the worst-case scenario - that each Y takes TimeX2 ms to process.
Therefore, the total time to process the queue will be:
TimeX2 * Count(Y) / Count(X)
You want that value to be less than or equal to MaxTime, so:
MaxTime >= TimeX2 * Count(Y) / Count(X)
Count(X) >= TimeX2 * Count(Y) / MaxTime
Therefore, the minimum number of Xs required will be the ceiling of the number of Ys multiplied by the maximum time to process a Y, divided by the maximum time allowed.
NB: If you were willing to accept that the queue might not be fully processed within the time limit, and you had more information about the distribution of processing times, then you might be able to bring that number down a bit. You could also consider an adaptive approach, where you increase or decrease the number of X s over time based on the number of Y s waiting to be processed and the amount of time remaining.
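That worst-case formula is a one-liner in code (hypothetical names, Python for brevity):

```python
import math

def min_workers(count_y, time_x2_ms, max_time_ms):
    # Worst case: every Y takes TimeX2 ms, so Count(X) workers need
    # TimeX2 * Count(Y) / Count(X) ms in total; solve for Count(X)
    # and round up, since workers come in whole units.
    return math.ceil(count_y * time_x2_ms / max_time_ms)
```

With the numbers used later in this thread, `min_workers(20, 10, 100)` gives 2.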
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
It would have been simple if every X could handle only one Y.
But every time an X finishes working on a Y, it takes another one.
And X is a precious resource, so I can't just go all out with Xs. And these damn Ys just keep coming...
|
|
|
|
|
That's already covered.
Try putting some numbers in - eg:
TimeX2 = 10
MaxTime = 100
Count(Y) = 20
TimeX2 * Count(Y) / MaxTime = 2
If each X was only handling one Y, you'd need 20 Xs to process the queue.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
You need an average time for x; or an average time for EACH x.
You need a Y arrival rate. If you're just processing what the "night shift" produced, you still need some idea of quantity.
With Y type stacks / queues and an x type timer factory, you have a simulation.
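To make that concrete, here is one minimal sketch of such a simulation (Python, invented names): each X is a "free at time t" entry in a min-heap, and whenever one finishes a Y it immediately takes the next.

```python
import heapq
import random

def simulate(num_x, num_y, t_min_ms, t_max_ms, seed=0):
    """Return the time at which the last Y finishes, given num_x workers
    that each grab the next Y as soon as they finish the previous one."""
    rng = random.Random(seed)
    free_at = [0.0] * num_x                    # every X starts idle at t=0
    heapq.heapify(free_at)
    finish = 0.0
    for _ in range(num_y):                     # each Y goes to the earliest-free X
        t = heapq.heappop(free_at)
        t += rng.uniform(t_min_ms, t_max_ms)   # work time for this Y
        finish = max(finish, t)
        heapq.heappush(free_at, t)
    return finish
```

With `t_min_ms == t_max_ms == 10`, 2 Xs clear 20 Ys at exactly 100 ms, matching the worst-case formula above; with random times you can run it repeatedly to estimate a distribution.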
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects
|
|
|
|
|
We know beforehand how many Ys will enter the queue. Once the collection of Xs starts working, there's no stopping unless we are past the MaxTime limit; each X should not work for more than the given MaxTime. How many Xs do I supply to work on a given number of Ys, when each X takes another Y from the queue as soon as it finishes with the previous one?
|
|
|
|
|
N = (Y * T1) / (X * T2) where T2 "is less than MaxTime".
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects
|
|
|
|
|
I have thought up a sorting algorithm.
Its comparison count is about 12% fewer than gcc's quicksort. (Averaged over 1000 runs of a 1000-item sort test.)
If I were a scholar, it would be a case for a dissertation.
But I'm not. (I am a smartphone application developer.)
(I once made an article about a previous version of the algorithm and submitted it to Cornell University, and it was refused. They said to go to a forum like this.)
Its characteristics:
. It would be classified as a variation of merge sort.
. It is in-place, but different from "in-place merge sort". (Normally merge sort is not in-place.)
I think it may be the fastest sort ever, but I currently have no way to publish it.
Does anyone know a good way to announce it to people who can evaluate it?
(The program listing is a little too long to include in this message.
I have not yet made an explanation document for it; I am still testing and modifying it.)
|
|
|
|
|
Member 14560162 wrote: "But I have now no way to publish. Is there someone who knows the good way to announce it to the people who can evaluate it?"
Have you considered publishing it as an article here?
Submit a new Article[^]
They are designed for code and algorithms, plus the explanation of them.
Here's a couple of mine, to give you the idea:
Using struct and class - what's that all about?[^]
List<T> - Is it really as efficient as you probably think?[^]
You'll reach up to 14,000,000 software developers, and get feedback (positive and / or negative) fairly quickly once it's moderated.
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Thank you for the answer.
I had not found the articles page before you pointed it out.
I will try to write an article.
But the submission will take some time, because I have written nothing yet and must write it between other work.
|
|
|
|
|
No rush - take your time!
Articles take me a fair amount of time to write as well - often several times longer than the code they are based on.
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
You can mark the article as a work in progress and simply save it for further editing without having to publish in an unfinished state (which would be bad thing to do).
|
|
|
|
|
I'd like to correct a point in my question.
"Quicksort is the fastest sort" was my misunderstanding. Merge sort is faster if comparison count is the topic.
The sort I thought up does the same operations as the "bottom-up implementation" of merge sort, without using another array.
|
|
|
|
|
Hello.
I would like to ask if anybody is able to write Boruvka's and/or Prim's algorithm in Pascal? Thank you.
|
|
|
|
|
An experienced Pascal programmer could probably do it. However, this site does not provide code to order, so you will need to recruit someone yourself.
|
|
|
|
|
So, this question is bugging me because my calculus is rusty... so here I am seeking help. I thought the answer was 0, but I dismissed it, because 0 implies instant termination and therefore a crash. But if I pick 0, then 2^n = 1 and 100n^2 = 0, which makes 100n^2 faster - of course, but that shouldn't be correct.
When I punch in n = 32: 2^32 = 4,294,967,296
32 * 32 * 100 = 102,400
20 * 20 * 100 = 40,000
So I brute-forced to find the first occurrence of 100n^2 being faster than 2^n; it was n = 15.
How could I have found this efficiently?
|
|
|
|
|
You could always cheat and use Wolfram|Alpha:
https://www.wolframalpha.com/input/?i=100+n%5E2+%3C+2%5En[^]
The solution seems to involve the Lambert W-Function[^], so it's not simple.
Looking at the generated number line, the lowest non-zero crossover is 14.3247, which is consistent with your brute-force result: the first integer above it is 15.
Given the small input range (1-31), brute force is probably still the best option:
int n = Enumerable.Range(1, 31).First(n => 100 * n * n < Math.Pow(2, n));
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Hi friend, lol... But Batman doesn't cheat, so I won't either!
Anyway, I found a solution a couple of days ago; I want to share it with you.
The answer was 15; 14 is still too small.
There are two ways I know how to do this, after the stackexchange senpais taught me.
The first is binary search.
You take a range, say 10-20, since it was easy to isolate.
You compute the midpoint n = (20 + 10) / 2 = 15 and plug it in; if n satisfies the condition you search the lower half, otherwise the upper half, until you've narrowed down to the first n that works.
The other way is fancier and more effective.
You take the log of both sides of the equation: log(2^n) and log(100n^2),
which gives n * log 2 = log 100 + 2 * log n.
Using base-2 logs, that becomes n = log2(100) + 2 * log2(n),
which is roughly n = 7 + 2 * log2(n).
n = 7 + 2 * log2(n) is now your f(n): treat the right-hand side as f(n) and iterate n = f(n) until it settles, which lands you right at the crossover.
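That fixed-point idea in code (a sketch; `c` and `p` generalize the 100 and the square):

```python
import math

def crossover(c=100.0, p=2.0, guess=32.0, iters=60):
    # Iterate n <- log2(c) + p*log2(n), the base-2 log of c*n**p == 2**n.
    # Near the root the map's slope is about p/(n*ln 2) < 1, so the
    # iteration is a contraction and converges quickly.
    n = guess
    for _ in range(iters):
        n = math.log2(c) + p * math.log2(n)
    return n
```

`crossover()` converges to about 14.3247 (the same value Wolfram|Alpha reports), so the first integer n where 2^n exceeds 100n^2 is 15.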
|
|
|
|
|
Some of the techniques require building chains that jump from one candidate to another, sometimes to different cells, sometimes to different candidates within the same cell.
The chain lengths can be anywhere between 4 and 30 nodes long, and the amount of possible chains that can be made is quite a large number.
My code is technically working and can find the chains I am looking for, but it can take 20 seconds to find a single useful chain sometimes.
Right now I am building the chains using nodes in a tree in a linked-list format, and using recursion to continue the chain. When a terminal node is found it is added to a list to be tested. Then it breaks the chain into smaller ones and tries them too.
The problem is that I am probably building chains more than once (because a chain is the same both forwards and backwards), and I am not sure how I can eliminate the redundancy.
Or maybe the problem is that I am using recursion and I should be using iteration, but I am not sure the best method to iterate with.
This is the recursive call within the function:
if (currNode.children.Count > 0)
{
    for (int n = 0; n < currChildCount; n++)
    {
        temp = currNode.children[n];
        BuildAIC(temp, temp.note, terminalNodes, !findStrongLink);
    }
}
When the recursive loop finishes, I will have all the chains that can be built starting from one point.
I iterate through each point and each note in each point and then check the terminal nodes like this:
foreach (Point p in UnsolvedCells)
{
    foreach (int note in cells[p.Y, p.X].notes.ToList())
    {
        ancestor = new NodeAIC(p, note);
        BuildAIC(ancestor, note, terminalNodes, true);
        for (int n = 0; n < terminalNodes.Count; n++)
        {
            // ... test terminalNodes[n] ...
        }
    }
}
Can anyone give me some ideas?
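One idea for the forwards/backwards duplication, sketched in Python (your nodes would need a comparable key, e.g. a `(cell, candidate)` tuple): store each finished chain under a canonical orientation, so a chain and its reverse hash to the same key and the second build is skipped before testing.

```python
def canonical(chain):
    # A chain is the same forwards and backwards, so pick whichever
    # orientation compares smaller as the canonical form.
    fwd, rev = tuple(chain), tuple(reversed(chain))
    return fwd if fwd <= rev else rev

seen = set()

def is_new_chain(chain):
    key = canonical(chain)
    if key in seen:
        return False          # already built in the other direction
    seen.add(key)
    return True
```

This doesn't stop the duplicate from being *built*, but checking it before the expensive terminal-node testing roughly halves that work; pruning at build time would need the same canonical comparison applied to chain prefixes.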
|
|
|
|
|
Seems to me that a "Sudoku solver" "learns" with each pass and tags cells as to possible and not possible for the number range.
I don't see any learning in your version.
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects
|
|
|
|
|
|
Hi guys,
I'm studying for a computing exam and came past the following question on a past paper and need help with it.
When would algorithm A be slower than algorithm B? Demonstrate your answer with the help of an example.
Algorithm A
SET sum TO 0
FOR i=1 to size
FOR j=1 to 10000
sum=sum+1
Algorithm B
SET sum TO 0
FOR i=1 to size
FOR j=1 to size
sum=sum + 1
I came up with this answer but not sure if it is correct:
Algorithm A will be slower than algorithm B when the performance of the algorithm is directly proportional to the cube (or a higher power) of the size of the input data set - for example, if the Big O notation becomes O(N^3), O(N^4), O(N^5), etc. Making it O(N^3) by nesting the for loops in two more for loops:
Set Sum TO 0
For i=1 to size
For k=1 to size
For l=1 to size
For j=1 to 10000
sum=sum+1
|
|
|
|
|
Member 14525747 wrote: "The algorithm A will be slower than algorithm B when the performance of the algorithm is directly proportional to the cubed or more of the size of the input data set"
While I can't look into the mind of the designer of the question, I'm pretty sure that the intent was to analyze the algorithms as they are, so "when" means "for what size", and not "what if I change the algorithm".
|
|
|
|
|
A runs "longer" than B when size < 10,000.
A runs the same as B when size == 10,000.
A runs faster than B when size > 10,000.
Or, A is slower than B while size < 10,000.
(It was called "playing computer" or desk checking).
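The desk check above is easy to script (counting inner-loop iterations, which is all `sum=sum+1` does):

```python
def steps_a(size):
    return size * 10_000     # Algorithm A: size * 10000 inner iterations

def steps_b(size):
    return size * size       # Algorithm B: size * size inner iterations
```

For example, `steps_a(100)` is 1,000,000 against `steps_b(100)` at 10,000, so A is slower at size 100; the two meet exactly at size 10,000.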
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects
|
|
|
|