|
Don't go by the name; you can get very good people from companies whose names are unheard of, and the other kind from companies that are "top". My suggestion: the next time you offshore some work, don't go by the company name. Have a good interaction with the team that will be working on the specs, and if they are not up to the mark, as a customer I think you have the privilege of requesting a new team (not sure, though). Don't settle for less; every $ is valuable.
Sastry
|
It's a recurring complaint I hear time and time again about off-shoring to India. Funnily enough, it's not a reputation that applies to ex-pat Indians. If anything, it's the opposite.
"You get that on the big jobs."
|
I was part of an offshore team once. Trust me, not all of them are security experts. Even after you have said that security is a big deal in the application and has to be taken care of, I can tell you there will be hardly any security experts on the team the work is delivered to. The people who staff these projects mostly look for developers who can do DB and UI work, so they ignore the fact that security is critical in a web application. In my experience with a few large IT firms in India, what they mainly check is that all the test cases were executed and that the client has not come back with any defects. So I would suggest filing it as a defect; then they would get the right people to do the job. In my experience, most of the web-app security work is done by experts onsite, and the DLL is then sent back to be used by the offshore team, so offshore has no idea how it is implemented.
Most offshore team members cannot tell you whether MD5 or SHA-1 is more secure for hashing. So my suggestion: the next time you engage an offshore team to work on a project involving security, look at their project profiles to see whether they have done anything of that sort before. I hope you get a chance to review the team's profiles before they start work; if not, you can always request the profiles before hiring, and any top company should provide them.
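On the MD5-vs-SHA-1 question raised above: SHA-1 produces the longer digest (160 bits vs. MD5's 128), though by today's standards both are considered broken for security use and SHA-256 or better is the usual recommendation. A quick sketch in Python (not the project's stack, purely for illustration):

```python
# Compare digest sizes of MD5, SHA-1, and SHA-256.
# All three are in Python's standard hashlib module.
import hashlib

data = b"password123"  # example input, nothing special about it

md5 = hashlib.md5(data).hexdigest()        # 128-bit digest -> 32 hex chars
sha1 = hashlib.sha1(data).hexdigest()      # 160-bit digest -> 40 hex chars
sha256 = hashlib.sha256(data).hexdigest()  # 256-bit digest -> 64 hex chars

print(len(md5), len(sha1), len(sha256))    # 32 40 64
```

Digest length alone is not the whole story, of course; both MD5 and SHA-1 have known collision attacks.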
Sastry
|
As a rule of thumb never outsource critical parts of an application. Outsourced teams are there to do the grunt work; keep anything critical in house to be worked on by domain experts.
This is not to say that outsourced developers are any better or worse than onshore developers; it's simply a lot easier to manage an onshore team than one that is thousands of miles away, in a different time zone and with cultural differences that are not always obvious.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
I too was forced to work with an off-shore Indian company.
I was explaining to them that the file was binary.
Someone spoke up and said "I looked at the file and it's not binary as it contains more than ones and zeros."
Things did not get better from there!
<>
|
That was the best joke I ever heard. By the way, who was this computer genius?
Sastry
|
Either you made it up or those guys were really that ignorant!
|
I promise it really happened! After that I had to explain why the output from my 10-bit A/D was being sent as 16 bits. That didn't go any better.
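For what it's worth, sending a 10-bit ADC reading in a 16-bit word is completely ordinary: the sample simply occupies the low 10 bits, with the upper 6 bits as padding. A tiny Python sketch (hypothetical values, not the poster's actual protocol):

```python
# A 10-bit ADC sample (range 0..1023) carried in a 16-bit word:
# the value sits in the low 10 bits; the upper 6 bits are zero padding.
sample = 0x3FF                 # maximum 10-bit reading, i.e. 1023
word = sample & 0xFFFF         # still 1023; fits easily in 16 bits
print(f"{word:016b}")          # 0000001111111111
```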
<>
|
We've picked up quite a bit of work from clients who've had enough of the crap that outsourcing companies produce.
|
God bless the Indian firms. I have made thousands of dollars "fixing", and making legal, code generated overseas. For 10 years it was my bread and butter. Doing business with Indian shops is cheaper up front, but the costs rise rapidly once the company has to hire me.
|
...and they probably got the coding idea by posting a question on Code Project asking 'can someone give me code to....'
|
We had an Indian company taking our code and converting it. In our initial discussions I stated two architectural requirements, and they later claimed I had never mentioned them! Then they said they wanted more money to meet my specs.
So at our next big meeting I gave them the requirement of 300 txn per second and would not let them move off the subject until it was written on the board as a requirement. (They tried to pass over it, claiming it was "standard" or some such bull cookies.)
|
Uhm, well, Microsoft certainly does not want to duplicate code. This is a dotPeek disassembly of the framework reference source for the .NET 4.0 Tuple helper class:
public static class Tuple
{
    internal static int CombineHashCodes(int h1, int h2)
    {
        return (h1 << 5) + h1 ^ h2;
    }
    internal static int CombineHashCodes(int h1, int h2, int h3)
    {
        return Tuple.CombineHashCodes(Tuple.CombineHashCodes(h1, h2), h3);
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4)
    {
        return Tuple.CombineHashCodes(Tuple.CombineHashCodes(h1, h2), Tuple.CombineHashCodes(h3, h4));
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5)
    {
        return Tuple.CombineHashCodes(Tuple.CombineHashCodes(h1, h2, h3, h4), h5);
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6)
    {
        return Tuple.CombineHashCodes(Tuple.CombineHashCodes(h1, h2, h3, h4), Tuple.CombineHashCodes(h5, h6));
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7)
    {
        return Tuple.CombineHashCodes(Tuple.CombineHashCodes(h1, h2, h3, h4), Tuple.CombineHashCodes(h5, h6, h7));
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
    {
        return Tuple.CombineHashCodes(Tuple.CombineHashCodes(h1, h2, h3, h4), Tuple.CombineHashCodes(h5, h6, h7, h8));
    }
}
So... when you want a hash for a Tuple of 8 values, you incur the overhead of seven nested function calls on the stack. Nice...
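For anyone who wants to play with the combiner outside .NET, here is a quick sketch in Python (not C#) that mimics `(h1 << 5) + h1 ^ h2`, i.e. `h1 * 33 ^ h2`, with a mask standing in for C#'s 32-bit int wrap-around, and the same pairwise tree as the 8-argument overload:

```python
# Python sketch of the Tuple hash combiner above.
MASK = 0xFFFFFFFF  # emulate C# 32-bit int wrap-around

def combine(h1, h2):
    # (h1 << 5) + h1 ^ h2 in C# precedence: shift, then add, then xor.
    return (((h1 << 5) + h1) & MASK) ^ h2

def combine8(*hs):
    # Same pairwise tree as the 8-argument overload: ((1,2),(3,4)) with ((5,6),(7,8)).
    h1234 = combine(combine(hs[0], hs[1]), combine(hs[2], hs[3]))
    h5678 = combine(combine(hs[4], hs[5]), combine(hs[6], hs[7]))
    return combine(h1234, h5678)

print(combine(1, 2))        # 35, i.e. 1 * 33 ^ 2
print(combine8(*range(8)))
```

(For simplicity this sketch keeps values unsigned; C# would reinterpret the top bit as a sign.)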
Greetings - Jacek
modified 18-Apr-12 18:19pm.
|
Yikes!
Attempting to load signature...
A NullSignatureException was unhandled.
Message: "No signature exists"
All of the books in the world contain no more information than is broadcast as video in a single large American city in a single year. Not all bits have equal value.
Carl Sagan
|
They might get inlined. I'm pretty sure the deepest one will be, at least.
But yeah, it's... odd.
|
harold aptroot wrote: They might get inlined.
7-level deep inlining algorithm in JIT? That would be impressive.
Greetings - Jacek
|
For the fun of it, I tested your version against another version without the function calls. For this I just manually inlined all the calls:
public static class Tuple2
{
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
    {
        var h12 = (h1 << 5) + h1 ^ h2;
        var h34 = (h3 << 5) + h3 ^ h4;
        var h1234 = (h12 << 5) + h12 ^ h34;
        var h56 = (h5 << 5) + h5 ^ h6;
        var h78 = (h7 << 5) + h7 ^ h8;
        var h5678 = (h56 << 5) + h56 ^ h78;
        return (h1234 << 5) + h1234 ^ h5678;
    }
}
Just to be sure to optimize performance, I even removed temp variables where possible (meaning: without having to calculate the same thing twice):
public static class Tuple3
{
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
    {
        var h12 = (h1 << 5) + h1 ^ h2;
        var h34 = (h3 << 5) + h3 ^ h4;
        var h1234 = (h12 << 5) + h12 ^ h34;
        var h56 = (h5 << 5) + h5 ^ h6;
        return (h1234 << 5) + h1234 ^ ((h56 << 5) + h56 ^ ((h7 << 5) + h7 ^ h8));
    }
}
I tested your .NET disassembly code against these, and the results show almost no performance difference for my first alternative, and only a modest difference (10-15% slower!!!) for the second one (can anybody tell me why that one is slower?!?). As the .NET code is clearly more readable and maintainable, I would say they did everything correctly.
Here is my test code:
static void Main(string[] args)
{
    var sw1 = new Stopwatch();
    var sw2 = new Stopwatch();
    var sw3 = new Stopwatch();
    var res = 0;
    for (int j = 0; j < 3; j++)
    {
        Console.WriteLine("{0}. run", j + 1);
        sw1.Reset();
        res = 0;
        sw1.Start();
        for (int i = 0; i < 10000000; i++)
            res += Tuple1.CombineHashCodes(i, i, i, i, i, i, i, i);
        sw1.Stop();
        Console.WriteLine("Test 1: {0} Time: {1}", res, sw1.Elapsed);
        sw2.Reset();
        res = 0;
        sw2.Start();
        for (int i = 0; i < 10000000; i++)
            res += Tuple2.CombineHashCodes(i, i, i, i, i, i, i, i);
        sw2.Stop();
        Console.WriteLine("Test 2: {0} Time: {1} (+{2:f2}%)", res, sw2.Elapsed, (sw2.ElapsedTicks - sw1.ElapsedTicks) * 100 / sw1.ElapsedTicks);
        sw3.Reset();
        res = 0;
        sw3.Start();
        for (int i = 0; i < 10000000; i++)
            res += Tuple3.CombineHashCodes(i, i, i, i, i, i, i, i);
        sw3.Stop();
        Console.WriteLine("Test 3: {0} Time: {1} (+{2:f2}%)", res, sw3.Elapsed, (sw3.ElapsedTicks - sw1.ElapsedTicks) * 100 / sw1.ElapsedTicks);
        Console.WriteLine();
    }
    Console.ReadLine();
}
The results:
1. run
Test 1: -1607204864 Time: 00:00:00.0284306
Test 2: -1607204864 Time: 00:00:00.0286526 (+0,78%)
Test 3: -1607204864 Time: 00:00:00.0316660 (+11,38%)
2. run
Test 1: -1607204864 Time: 00:00:00.0281104
Test 2: -1607204864 Time: 00:00:00.0281490 (+0,14%)
Test 3: -1607204864 Time: 00:00:00.0319674 (+13,72%)
3. run
Test 1: -1607204864 Time: 00:00:00.0281114
Test 2: -1607204864 Time: 00:00:00.0285663 (+1,62%)
Test 3: -1607204864 Time: 00:00:00.0313268 (+11,44%)
Just to be complete: the test results were measured in release mode, outside of Visual Studio, started as a console application. Inside the debugger, the original version shows a very significant slowdown.
|
Did you try wrapping the whole thing in an unchecked { } block?
You could also try making the JIT compile your methods once before testing:
int i = 1;
Tuple1.CombineHashCodes(i, i, i, i, i, i, i, i);
sw1.Reset();
res = 0;
sw1.Start();
for (i = 0; i < 10000000; i++)
res += Tuple1.CombineHashCodes(i, i, i, i, i, i, i, i);
sw1.Stop();
Console.WriteLine("Test 1: {0} Time: {1}", res, sw1.Elapsed);
internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
{
    unchecked
    {
        var h12 = (h1 << 5) + h1 ^ h2;
        var h34 = (h3 << 5) + h3 ^ h4;
        var h1234 = (h12 << 5) + h12 ^ h34;
        var h56 = (h5 << 5) + h5 ^ h6;
        // Return from inside the unchecked block so the result is in scope.
        return (h1234 << 5) + h1234 ^ ((h56 << 5) + h56 ^ ((h7 << 5) + h7 ^ h8));
    }
}
Also, I would change the testing code:
sw1.start
for (int j = 0; j < 1000; j++)
{
}
sw1.stop
sw2...
for (int j = 0; j < 1000; j++)
{
}
sw3...
for (int j = 0; j < 1000; j++)
{
}
Greetings - Jacek
modified 17-Apr-12 7:00am.
|
1. Arithmetic overflow checking is completely disabled in my solution (which is the default setting, at least for VS 2010 console apps). Otherwise my whole test wouldn't work, because I add so many large numbers that an OverflowException would occur.
2. Regarding the JIT: that's why I made three (the for-j loop) separately timed complete test runs. I always ignore the first result.
|
OK. I have also posted a suggestion about increasing a number of tests and averaging results. Did you try it?
Greetings - Jacek
|
I really don't see why this should make a difference here, but to avoid a long discussion about how the JIT works, I rewrote my tests.
First I do one test run just to warm up the JIT. Then I do 1,000 test runs for each test case, and in each test run 10,000,000 calls to CombineHashCodes are made.
Here are the results:
Test 1: -1607204864 Time Avg: 00:00:00.0281418 (100,00%)
Test 2: -1607204864 Time Avg: 00:00:00.0281436 (100,01%)
Test 3: -1607204864 Time Avg: 00:00:00.0312717 (111,12%)
It seems the JIT optimizer does inline those nested calls, exactly as I did by hand.
Here is the complete code:
class Program
{
    static void Main(string[] args)
    {
        var sw1 = new Stopwatch();
        var sw2 = new Stopwatch();
        var sw3 = new Stopwatch();
        var res1 = 0;
        var res2 = 0;
        var res3 = 0;
        res1 = Test1(sw1, res1);
        res2 = Test2(sw2, res2);
        res3 = Test3(sw3, res3);
        sw1.Reset();
        sw2.Reset();
        sw3.Reset();
        int noOfRuns = 1000;
        for (int j = 0; j < noOfRuns; j++)
        {
            res1 = Test1(sw1, res1);
            res2 = Test2(sw2, res2);
            res3 = Test3(sw3, res3);
        }
        var avg1 = sw1.Elapsed.Ticks / noOfRuns;
        var avg2 = sw2.Elapsed.Ticks / noOfRuns;
        var avg3 = sw3.Elapsed.Ticks / noOfRuns;
        Console.WriteLine("Test 1: {0} Time Avg: {1} ({2:f2}%)", res1, new TimeSpan(avg1), 100.0);
        Console.WriteLine("Test 2: {0} Time Avg: {1} ({2:f2}%)", res2, new TimeSpan(avg2), avg2 * 100.0 / avg1);
        Console.WriteLine("Test 3: {0} Time Avg: {1} ({2:f2}%)", res3, new TimeSpan(avg3), avg3 * 100.0 / avg1);
        Console.ReadLine();
    }
    private static int Test3(Stopwatch sw3, int res3)
    {
        res3 = 0;
        sw3.Start();
        for (int i = 0; i < 10000000; i++)
            res3 += Tuple3.CombineHashCodes(i, i, i, i, i, i, i, i);
        sw3.Stop();
        return res3;
    }
    private static int Test2(Stopwatch sw2, int res2)
    {
        res2 = 0;
        sw2.Start();
        for (int i = 0; i < 10000000; i++)
            res2 += Tuple2.CombineHashCodes(i, i, i, i, i, i, i, i);
        sw2.Stop();
        return res2;
    }
    private static int Test1(Stopwatch sw1, int res1)
    {
        res1 = 0;
        sw1.Start();
        for (int i = 0; i < 10000000; i++)
            res1 += Tuple1.CombineHashCodes(i, i, i, i, i, i, i, i);
        sw1.Stop();
        return res1;
    }
}
public static class Tuple1
{
    internal static int CombineHashCodes(int h1, int h2)
    {
        return (h1 << 5) + h1 ^ h2;
    }
    internal static int CombineHashCodes(int h1, int h2, int h3)
    {
        return Tuple1.CombineHashCodes(Tuple1.CombineHashCodes(h1, h2), h3);
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4)
    {
        return Tuple1.CombineHashCodes(Tuple1.CombineHashCodes(h1, h2), Tuple1.CombineHashCodes(h3, h4));
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5)
    {
        return Tuple1.CombineHashCodes(Tuple1.CombineHashCodes(h1, h2, h3, h4), h5);
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6)
    {
        return Tuple1.CombineHashCodes(Tuple1.CombineHashCodes(h1, h2, h3, h4), Tuple1.CombineHashCodes(h5, h6));
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7)
    {
        return Tuple1.CombineHashCodes(Tuple1.CombineHashCodes(h1, h2, h3, h4), Tuple1.CombineHashCodes(h5, h6, h7));
    }
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
    {
        return Tuple1.CombineHashCodes(Tuple1.CombineHashCodes(h1, h2, h3, h4), Tuple1.CombineHashCodes(h5, h6, h7, h8));
    }
}
public static class Tuple2
{
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
    {
        var h12 = (h1 << 5) + h1 ^ h2;
        var h34 = (h3 << 5) + h3 ^ h4;
        var h1234 = (h12 << 5) + h12 ^ h34;
        var h56 = (h5 << 5) + h5 ^ h6;
        var h78 = (h7 << 5) + h7 ^ h8;
        var h5678 = (h56 << 5) + h56 ^ h78;
        return (h1234 << 5) + h1234 ^ h5678;
    }
}
public static class Tuple3
{
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5, int h6, int h7, int h8)
    {
        var h12 = (h1 << 5) + h1 ^ h2;
        var h34 = (h3 << 5) + h3 ^ h4;
        var h1234 = (h12 << 5) + h12 ^ h34;
        var h56 = (h5 << 5) + h5 ^ h6;
        return (h1234 << 5) + h1234 ^ ((h56 << 5) + h56 ^ ((h7 << 5) + h7 ^ h8));
    }
}
|
Try swapping the order of the tests in as many ways as there are permutations, since the JIT cost is mostly paid at the first invocations.
Time you enjoy wasting is not wasted time - Bertrand Russell
|
When you declare your own temporary variables, you save the JIT compiler or runtime the time needed to generate them itself where necessary. That might explain the speed difference.
|
I suspect your disassembly utility may not understand the "params" form of a C# call. I seriously doubt that Microsoft would write it the way your utility is showing it.
It's more likely coded like this:
internal static int CombineHashCodes(params int[] h)
{
    int returnHash = 0;
    for (int i = 0; i < h.Length; ++i)
    {
        returnHash = (returnHash << 5) + returnHash ^ h[i];
    }
    return returnHash;
}
// ... or something like that!
Although I would suspect that combining 8 hash codes is going to overflow a 32-bit int ... but you get my point!
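On the overflow worry: C# integer arithmetic is unchecked by default, so "overflow" while combining hash codes simply wraps around modulo 2^32 rather than throwing, which is harmless (and expected) for hash mixing. A Python sketch of that wrap-around, with a mask emulating C#'s 32-bit signed int (assumption: the default unchecked context applies):

```python
# Emulate C#'s default unchecked 32-bit int: overflow wraps silently.
MASK = 0xFFFFFFFF

def to_int32(x):
    # Interpret the low 32 bits of x as a signed C# int.
    x &= MASK
    return x - (1 << 32) if x >= (1 << 31) else x

print(to_int32(0x7FFFFFFF + 1))  # -2147483648: int.MaxValue + 1 wraps to int.MinValue
```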
-Max
|