|
Hi friend, lol... But Batman doesn't cheat, so I won't either!
Anyway, I found a solution a couple of days ago and I wanna share it with you.
The answer was 15; 14 is still too small.
There are two ways I know how to do this, after the Stack Exchange senpais taught me.
The first is binary search.
You take a range, let's say 10-20, since it was easy to isolate.
You compute n = (10 + 20) / 2. Then you plug n in; if n satisfies the condition but n - 1 doesn't, you're done. Otherwise you keep the half of the range that contains the crossover and repeat.
The other way is fancier and more effective.
You take the base-2 log of both sides of 2^n > 100n^2: log2(2^n) and log2(100n^2),
which gives you n * log2(2) = log2(100) + 2*log2(n).
Since log2(2) = 1, this is n = log2(100) + 2*log2(n),
and since log2(100) ≈ 6.64, that is roughly n = 7 + 2*log2(n).
n = 7 + 2*log2(n) is now your f(n).
So you just plug values of n into it; the smallest n with n >= 7 + 2*log2(n) is your result.
|
|
|
|
|
Some of the techniques require building chains that jump from one candidate to another, sometimes to different cells, sometimes to different candidates within the same cell.
The chains can be anywhere between 4 and 30 nodes long, and the number of possible chains is very large.
My code is technically working and can find the chains I am looking for, but it can take 20 seconds to find a single useful chain sometimes.
Right now I am building the chains using nodes in a tree in a linked-list format, and using recursion to continue the chain. When a terminal node is found it is added to a list to be tested. Then it breaks the chain into smaller ones and tries them too.
The problem is that I am probably building chains more than once (because a chain is the same both forwards and backwards), and I am not sure how I can eliminate that redundancy.
Or maybe the problem is that I am using recursion and I should be using iteration, but I am not sure the best method to iterate with.
This is the recursive call within the function:
if (currNode.children.Count > 0)
{
    for (int n = 0; n < currChildCount; n++)
    {
        temp = currNode.children[n];
        BuildAIC(temp, temp.note, terminalNodes, !findStrongLink);
    }
}
When the recursive loop finishes, I will have all the chains that can be built starting from one point.
I iterate through each point and each note in each point and then check the terminal nodes like this:
foreach (Point p in UnsolvedCells)
{
    foreach (int note in cells[p.Y, p.X].notes.ToList())
    {
        ancestor = new NodeAIC(p, note);
        BuildAIC(ancestor, note, terminalNodes, true);
        for (int n = 0; n < terminalNodes.Count; n++)
        {
            // ... test each terminal node ...
        }
    }
}
Can anyone give me some ideas?
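One common way to tackle the forwards/backwards redundancy (a generic sketch, not the poster's actual types) is to give every chain a canonical orientation and keep a set of canonical chains already seen, so that a chain and its reversal map to the same key:

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// A node is identified here by (cellIndex, candidate); these are
// placeholder fields standing in for whatever uniquely identifies
// a node in the real solver (e.g. the NodeAIC above).
using Node  = std::pair<int, int>;
using Chain = std::vector<Node>;

// Orient the chain so the lexicographically smaller end comes first.
// A chain and its reversal then produce the same canonical form.
Chain canonical(Chain c) {
    Chain r(c.rbegin(), c.rend());
    return std::min(c, r);
}

// Returns true only the first time a chain (in either direction) is seen.
bool firstTimeSeen(const Chain& c, std::set<Chain>& seen) {
    return seen.insert(canonical(c)).second;
}
```

In the solver this check would go wherever a completed chain is about to be added to the list of terminal chains to test; a duplicate built from the other end is then skipped instead of re-tested.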
|
|
|
|
|
Seems to me that a "Sudoku solver" "learns" with each pass and tags cells as possible or not possible for each number.
I don't see any learning in your version.
The Master said, 'Am I indeed possessed of knowledge? I am not knowing. But if a mean person, who appears quite empty-like, ask anything of me, I set it forth from one end to the other, and exhaust it.'
― Confucian Analects
|
|
|
|
|
|
Hi guys,
I'm studying for a computing exam, came across the following question on a past paper, and need help with it.
When would algorithm A be slower than algorithm B? Demonstrate your answer with the help of an example.
Algorithm A
SET sum TO 0
FOR i = 1 to size
    FOR j = 1 to 10000
        sum = sum + 1
Algorithm B
SET sum TO 0
FOR i = 1 to size
    FOR j = 1 to size
        sum = sum + 1
I came up with this answer but not sure if it is correct:
Algorithm A will be slower than algorithm B when the performance of the algorithm is directly proportional to the cube or more of the size of the input data set, for example if the Big O notation becomes O(N^3), O(N^4), O(N^5), etc. O(N^3) is obtained by nesting the for loops in two more for loops:
Set sum TO 0
For i = 1 to size
    For k = 1 to size
        For l = 1 to size
            For j = 1 to 10000
                sum = sum + 1
|
|
|
|
|
Member 14525747 wrote: The algorithm A will be slower than algorithm B when the performance of the algorithm is directly proportional to the cubed or more of the size of the input data set
While I can't look into the mind of the designer of the question, I'm pretty sure that the intent was to analyze the algorithms as they are, so "when" means "for what size", and not "what if I change the algorithm".
|
|
|
|
|
A runs "longer" than B when size < 10,000.
A runs the same as B when size == 10,000.
A runs faster than B when size > 10,000.
Or, A is slower than B while size < 10,000.
(It was called "playing computer" or desk checking).
|
|
|
|
|
Function f(a: integer, b: integer): integer
var r, z: integer
Begin
    r <-- 0
    z <-- 1
    while (a != 0)
        r <-- r + (a mod 10) * z
        z <-- z * b
        a <-- a div 10
    endWhile
    return r
End
|
|
|
|
|
The best way to find out is to change this pseudo-code into a real function.
Tip: if I'm not wrong, it returns a when b = 10; more generally it reinterprets the decimal digits of a as digits in base b.
Good luck!
|
|
|
|
|
Quote: what does this algorithm do ?
the first thing to do is to make it a program and run it with many sample data and analyze the results.
Another way is to simulate on paper and note the values of variable as it process the data.
Patrice
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
|
|
|
|
|
I want to ask one general question here: we have seen 32-bit and 64-bit operating systems.
Does any larger-bit operating system exist? If yes, can you please share a link to download it? I want the fastest operating system.
|
|
|
|
|
Oh dear ... this question is proof that "a little knowledge is a dangerous thing".
No, there aren't OSes "larger than 64 bit". And if there were, they probably wouldn't be faster (in fact they might be slower), and they wouldn't run on your hardware, because it doesn't support 128-bit operations. They would also need crazy amounts of RAM, as all pointers would be 128 bits.
You want a fast operating system? Go backwards to DOS - it's fast as heck compared to any modern OS, simply because it is small and doesn't support all the "bells and whistles" that Windows or Linux do: no GUI, for example.
Instead of thinking "fastest possible OS", think parallel processing and distribute your task across multiple processors: get it right and it's both scalable and dramatically faster than changing your OS...
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Ten years ago there were rumours circulating that Microsoft was working on a 128-bit version of Windows. They were aiming for Windows 8, or definitely Windows 9.
Maybe that's why Windows skipped straight to v10?
Microsoft Working on 128-bit Windows[^]
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Below is the link to the problem and my code. I could only pass case 1; the other cases show wrong answers. It would be of great help if anyone could identify the errors in my code.
Problem Link
#include <bits/stdc++.h>
using namespace std;

vector<long long int> freq(1000000, 0);

void merge(vector<long long int>& v, long long int low, long long int mid, long long int high)
{
    long long int count = 0;
    long long int i = low, j = mid + 1, k = 0;
    vector<long long int> temp(high - low + 1);
    for (long long int p = low; p <= high; p++)
    {
        if (i > mid)
        {
            temp[k++] = v[j++];
        }
        else if (j > high)
        {
            temp[k++] = v[i];
            freq[v[i]] += count;
            i++;
        }
        else if (v[i] <= v[j])
        {
            temp[k++] = v[i];
            freq[v[i]] += count;
            i++;
        }
        else
        {
            temp[k++] = v[j++];
            count++;
        }
    }
    for (long long int p = 0; p < k; p++)
    {
        v[low + p] = temp[p];
    }
}

void merge_sort(vector<long long int>& v, long long int low, long long int high)
{
    if (low >= high)
    {
        return;
    }
    long long int mid = (low + high) / 2;
    merge_sort(v, low, mid);
    merge_sort(v, mid + 1, high);
    merge(v, low, mid, high);
}

int main()
{
    long long int t;
    cin >> t;
    while (t--)
    {
        long long int n, x;
        cin >> n;
        vector<long long int> v;
        for (long long int i = 0; i < n; i++)
        {
            cin >> x;
            v.push_back(x);
        }
        vector<long long int> w = v;
        merge_sort(v, 0, n - 1);
        for (long long int i = 0; i < n; i++)
        {
            cout << freq[w[i]] << " ";
        }
        cout << endl;
    }
}
|
|
|
|
|
You need to provide more details: explain what inputs you are using, what outputs you expect, and what you actually get. Also show where in the code the problems occur.
|
|
|
|
|
Level 1 | Level 2 | Level 3 | ID
__A____________________1
_________AA____________2
________________AA1____3
________________AA2____4
_________AB____________5
__B____________________6
__C____________________7
_________CA____________8
________________CA1____9
________________CA2____10
__D____________________11
_________DA____________12
Underscores are for formatting purposes.
I thought it was a hashed directory structure, but one piece of information is that there are 4 nodes and the first (A) has only 2 subnodes (or that is a mistake in the description).
What algorithm or structure can it be?
Or maybe it is a labeled tree?
|
|
|
|
|
It looks like a tree. But since the above diagram is the only information you provided, it could actually be anything.
|
|
|
|
|
Hello,
Yesterday, I was asked a question in an interview where I had to make an algorithm that finds the maximum sum in a given array such that the elements added don't share any common digits.
For example, our array is: 31, 44, 46, 63, 65.
So we can't add the elements 31 and 63, as the digit 3 is common, but we can take the elements 44, 65, 31, as no digits are common among these elements, and also 65 > 63.
Can anyone think of a good approach in this?
Also, what if someone asks me another question tomorrow that is as tough as this one? How can I prepare for these types of questions? What material should I look at? Any suggestion would be appreciated a lot.
Thanks in advance.
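One approach that comes to mind (a sketch, not a known-optimal answer, and assuming non-negative integers): encode each number's digit set as a 10-bit mask, then run a knapsack-style DP over disjoint masks, keeping the best sum per set of used digits:

```cpp
#include <algorithm>
#include <vector>

// 10-bit mask of the decimal digits occurring in x (digit d -> bit d).
int digitMask(int x) {
    int m = 0;
    if (x == 0) m |= 1;  // treat 0 as containing the digit 0
    while (x > 0) { m |= 1 << (x % 10); x /= 10; }
    return m;
}

// Best achievable sum over all subsets whose digit sets are pairwise disjoint.
// dp[mask] = best sum using exactly the digits in mask (-1 = unreachable).
int maxDisjointDigitSum(const std::vector<int>& a) {
    std::vector<int> dp(1 << 10, -1);
    dp[0] = 0;
    for (int x : a) {
        int mx = digitMask(x);
        // Iterate masks downwards so each element is used at most once.
        for (int m = (1 << 10) - 1; m >= 0; --m) {
            if (dp[m] >= 0 && (m & mx) == 0)
                dp[m | mx] = std::max(dp[m | mx], dp[m] + x);
        }
    }
    return *std::max_element(dp.begin(), dp.end());
}
```

For the example array {31, 44, 46, 63, 65} this picks 44 + 65 + 31 = 140. The table has only 2^10 = 1024 entries, so the whole thing is O(n * 1024) regardless of how large the numbers are.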
|
|
|
|
|
|
Discuss Xamarin Platform in terms of the following subtopics.
Mobile SDLC
Language and support
perfomance and security
rival technology
summary
|
|
|
|
|
We are more than willing to help those that are stuck: but that doesn't mean that we are here to do it all for you! We can't do all the work, you are either getting paid for this, or it's part of your grades and it wouldn't be at all fair for us to do it all for you.
So we need you to do the work, and we will help you when you get stuck. That doesn't mean we will give you a step by step solution you can hand in!
Start by explaining where you are at the moment, and what the next step in the process is. Then tell us what you have tried to get that next step working, and what happened when you did.
|
|
|
|
|
Hi Everyone!!
I have a list of objects, each one has a weight and a value.
So an array like this
[ [O1, V1, W1], [O2, V2, W2] ... [On, Vn, Wn]]
The goal is to get the sublist of objects with the highest total value without exceeding the weight limit.
So I get an array like this
[ [O1, O2, O4, O7], V, W]
where V = V1 + V2 + V4 + V7 and W = W1 + W2 + W4 + W7
and W <= Wmax and V is the highest possible value.
Of course, I can build all the valid lists and sort them, but that may take long!
Is there a more efficient algorithm?
Thanks everyone for your consideration!
-- modified 1-May-19 17:06pm.
|
|
|
|
|
We are more than willing to help those that are stuck: but that doesn't mean that we are here to do it all for you! We can't do all the work, you are either getting paid for this, or it's part of your grades and it wouldn't be at all fair for us to do it all for you.
So we need you to do the work, and we will help you when you get stuck. That doesn't mean we will give you a step by step solution you can hand in!
Start by explaining where you are at the moment, and what the next step in the process is. Then tell us what you have tried to get that next step working, and what happened when you did.
|
|
|
|
|
What I did:
1) I built a simple algorithm that gives a good approximation.
2) I searched for algorithms for known problems close to mine.
I saw the ship berthing problem, but it seems to be different.
What I'm searching for:
I want to know if my problem is a classic one (or close to one), and so, whether an algorithm has been published to solve it.
I didn't ask you to build a specific solution, but to help me search in the right direction.
Thanks everyone for your consideration!
|
|
|
|
|
I shouldn't be telling you this, but the answers to your questions are Yes and No.
Yes this is a classic (I mean classical) problem (it has been extensively studied since 1897), and No there is no algorithm that can find an optimal solution in polynomial time (i.e. quickly enough). You should also be advised that if you do discover such an algorithm for this problem, you will earn the one million dollar prize put up by the Clay Mathematics Institute for being the first person to show that P = NP.
I suppose you want to know what this classical problem is called. But as OriginalGriff quite rightly said, you should think about whether there are any optimizations you can make to your simple algorithm. Such things generally involve noting under what circumstances can some aspect of the problem be simplified. One might, for example, investigate the circumstances that would lead you to conclude that item A will never be part of an optimal solution because you can always do better than item A if you were to replace it with a set of items B, C, ... that taken together are both lighter and more valuable than A. Or perhaps you could define some substitution arrangement that is more complicated than just swapping out one item for another item as you pack your … er … um …
knapsack.
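For reference, the classic pseudo-polynomial dynamic program for the 0/1 version looks like this (a sketch with plain integer vectors standing in for the poster's [O, V, W] triples, assuming integer weights):

```cpp
#include <algorithm>
#include <vector>

// Classic 0/1 knapsack DP: best[w] = highest value achievable with
// total weight at most w. O(n * wmax) time, pseudo-polynomial.
int knapsack(const std::vector<int>& value,
             const std::vector<int>& weight, int wmax) {
    std::vector<int> best(wmax + 1, 0);
    for (std::size_t i = 0; i < value.size(); ++i) {
        // Iterate weights downwards so each object is taken at most once.
        for (int w = wmax; w >= weight[i]; --w)
            best[w] = std::max(best[w], best[w - weight[i]] + value[i]);
    }
    return best[wmax];
}
```

This is O(n * Wmax), which is fine when the weight limit is a modest integer; it is not polynomial in the bit-length of Wmax, which is why the P = NP caveat above still stands.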
|
|
|
|
|