|
Yes, I hear you (though I don't know assembly programming).
A binary patch is the bottom line - if the hacker knows where the code that makes the comparison is:
IsAuthorized = LicenseController.CheckPermission("ModuleName");
if (IsAuthorized)
{
    // ... load the module ...
}
The question is, how hard or easy is it to spot that line if the code is obfuscated? (In machine code it would be the same cmp instruction either way, but there are still thousands of them, so the hacker would have to try patching them one by one, right? This leads me to thinking of dynamically generating millions of comparison operations in the .NET code [none of which are ever on the actual runtime execution path, of course] to throw the attacker further off the trail.)
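A minimal sketch of that decoy idea, assuming you generate the dummy checks into a .cs file at build time. All names here (`DecoyGenerator`, `DecoyState`, `ModuleLoader`) are hypothetical, not from any real library:

```csharp
using System;

// Hypothetical sketch: emit dummy license checks as C# source so the real
// comparison hides among thousands of identical-looking ones. None of the
// generated branches are ever taken at runtime.
class DecoyGenerator
{
    static readonly Random Rng = new Random(12345); // fixed seed: repeatable builds

    // Returns 'count' lines of dummy if-statements to paste into a
    // generated .cs file. DecoyState.Key is a field that never holds
    // any of these constants, so no branch is ever entered.
    public static string[] EmitDecoys(int count)
    {
        var lines = new string[count];
        for (int i = 0; i < count; i++)
        {
            lines[i] = string.Format(
                "if (DecoyState.Key == {0}) ModuleLoader.Load(\"m{1}\");",
                Rng.Next(), i);
        }
        return lines;
    }

    static void Main()
    {
        foreach (string line in EmitDecoys(3))
            Console.WriteLine(line);
    }
}
```

One caveat: if the compiler or JIT can prove a branch is dead (for example, if the guard compares against a `const`), it will simply remove it, so the guard has to be a field read that the optimizer cannot fold away.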
Another question: where to store the decryption key?
dev
modified on Saturday, March 12, 2011 7:58 PM
|
|
|
|
|
For native executables, an attacker can attach to the process with a debugger, and if the application throws up a message box saying "you don't have a license", you've basically led them very close to the code path they are looking for. There are also various techniques for placing breakpoints that make the process of tracing through much easier, since they land in the vicinity of the code even if you don't give many hints.
I once had a licensing scheme that obfuscated all sensitive string literals, comparison keys, etc. with Blowfish encryption, and I generated the encryption key dynamically with a function. All it took to bypass it was to open the executable in binary mode (using Visual Studio), go to the byte sequence I had narrowed it down to with a simple debug trace of the process, replace the appropriate bytes with the opcode for NOP (0x90) to eliminate the branch that would have exited the application if the license was not correct, and save. I could then run the executable with all my licensing code bypassed. All my hard work trying to protect my app was basically useless.
That was with a native app. Someone trying to manipulate your managed code will probably have an even easier time reverse engineering your efforts (obfuscated or not) and will easily bypass any protective licensing mechanism. Every book I have on security basically states, "don't rely on hiding secrets in your code". I have to agree with that statement, but I would still say it's not a complete waste of time to discourage reverse engineering to some degree. Just don't convince yourself that you'll be able to completely prevent it; unfortunately, that is an exercise in futility.
I'd just hate to see someone spend too much time on this endeavor, since no matter how complicated you make it, it will probably boil down to a simple if/else statement in the end, and those are easily manipulated.
Some may scold me for talking about "how to" here, saying it gives ammunition to those who want to bypass our code protections, but I would argue that every security book treats this as "false sense of security 101" and usually covers it within the first 100 pages anyway. I think we are stuck with this reality, as unfortunate as it is.
Anyway, I would still encourage you to obfuscate to some degree to help protect your intellectual property.
Good luck with your project.
|
|
|
|
|
Many thanks.
I think I'm inclined to implement *many* licensing checks in different places, plus dummy if/else blocks (generated into a dummy .cs file/class by a simple console program) to throw them off the track. (Security by obscurity, yes, but I'm not worried about political correctness.)
Also, that "You do not have a license" message box should come from a separate thread, with a variable delay.
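A sketch of that delayed-failure idea (the names are mine, not from any library, and the denial action is injectable; in the real app it would show the message box and exit): schedule the refusal on a timer thread after a randomized delay, so the visible reaction is far, in both time and call stack, from the comparison that triggered it.

```csharp
using System;
using System.Threading;

// Sketch: when the license check fails, don't react inline. Schedule the
// denial on a ThreadPool timer after a variable delay, so a debugger trace
// from the "no license" message doesn't land near the actual check.
class DelayedDenial
{
    static Timer _pending; // field reference keeps the timer from being collected

    // onDeny would show the message box / call Environment.Exit;
    // it is a delegate here so the sketch is testable.
    public static void Schedule(int minDelayMs, int maxDelayMs, Action onDeny)
    {
        int delay = minDelayMs + new Random().Next(maxDelayMs - minDelayMs);
        _pending = new Timer(_ => onDeny(), null, delay, Timeout.Infinite);
    }
}
```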
dev
|
|
|
|
|
Hello...
I have run into an issue while designing my project.
My client wants to push system performance close to its maximum limit,
so I suggested a parallel programming solution.
But how can I make each thread run on its own core?
I need a one-to-one thread-to-core mapping (this is in C#).
The current system runs multiple threads across multiple cores, but it cannot control that one-to-one structure.
For example, on a quad-core system there are three business-logic threads and one monitor thread, and each needs to run on its own core.
Do you have any solution for this case?
Thanks for reading.
SungBae.Han
|
|
|
|
|
This[^] Stack Overflow article may help.
------------------------------------
I will never again mention that I was the poster of the One Millionth Lounge Post, nor that it was complete drivel. Dalek Dave
CCC Link[ ^]
Trolls[ ^]
|
|
|
|
|
Oops, I understand.
Thanks for your reply.
|
|
|
|
|
A thread pool is a nicer solution for multithreading, but right now the CPUs are maxed out; every core is always full.
The client will add more servers, but the current situation will continue, because they run big request-processing jobs every day.
So I want more fine-grained control over the CPU cores.
Can I do that?
|
|
|
|
|
You could have a look at the Thread class, paying attention to "processor affinity".
I have strong doubts it makes much sense on any Windows system, though; in my experience it isn't worth anything, i.e. the Windows scheduler does a good job of distributing the load over the cores and of not switching threads around unduly. The one situation where I would use it is when I had N groups of threads where each group needs to share a limited amount of cached data, so having all of a group's threads on the same core really is relevant.
When I have a number of similar/identical jobs, I tend to create up to 2*N threads where N equals Environment.ProcessorCount; then I let Windows make the best of it, which it does. And I don't use the ThreadPool in such situations.
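For completeness, here is a Windows-only sketch of what pinning a thread to one core looks like. The managed Thread class itself exposes no affinity property; you have to reach the underlying OS thread through `ProcessThread.ProcessorAffinity`, and the P/Invoke below uses kernel32's `GetCurrentThreadId` to find it. This is a sketch, not a recommendation:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

// Windows-only sketch: pin the calling thread to a single core.
class Affinity
{
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentThreadId();

    // Affinity mask with only the given core's bit set, e.g. core 2 -> 0b100.
    public static IntPtr MaskForCore(int core)
    {
        return (IntPtr)(1L << core);
    }

    public static void PinCurrentThread(int core)
    {
        Thread.BeginThreadAffinity();       // stop the CLR moving us to another OS thread
        uint osId = GetCurrentThreadId();
        foreach (ProcessThread pt in Process.GetCurrentProcess().Threads)
        {
            if (pt.Id == osId)
                pt.ProcessorAffinity = MaskForCore(core);
        }
        // call Thread.EndThreadAffinity() when the pinned work is done
    }

    static void Main()
    {
        // one worker per core, each pinned: e.g. three business-logic
        // threads plus one monitor thread on a quad-core box
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            int core = i;
            new Thread(() => { PinCurrentThread(core); /* work here */ }).Start();
        }
    }
}
```

Whether this actually helps is another matter; as noted above, the Windows scheduler usually does a good job without it.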
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
My database has a few tables with 250,000+ rows. We need to grab a few of these tables at app startup, so it takes a minute or so. Is there any "little known flag" to do that more efficiently?
I guess I could cache a local copy and do a timestamp type thing... I did try Microsoft Sync Services before, but it was much slower than just hitting the database.
BUT... that was when our DB was a lot smaller. Now it would probably be a lot faster past the initial startup time.
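For what it's worth, the "timestamp type thing" can be fairly simple. A sketch, assuming a `LastModified` datetime column on the table (hypothetical schema; the table and column names are mine, adapt them to your own):

```csharp
using System;

// Sketch of timestamp-based delta loading: cache the table locally and on
// each startup fetch only the rows changed since the previous sync.
// Assumes a LastModified column (hypothetical; not from the original post).
class DeltaSync
{
    // Builds the delta query; run it with @lastSync bound to the newest
    // LastModified value seen in the previous sync (DateTime.MinValue on
    // the very first run, which pulls everything once).
    public static string BuildDeltaQuery(string table)
    {
        return "SELECT * FROM " + table + " WHERE LastModified > @lastSync";
    }

    static void Main()
    {
        Console.WriteLine(BuildDeltaQuery("BigTable"));
    }
}
```

Merge the returned rows into the cached table (e.g. `DataTable.Merge`) and persist the new high-water mark. Note that deleted rows need separate handling, such as a tombstone table or a soft-delete flag.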
|
|
|
|
|
That number of rows (not forgetting the number of fields per row) is going to take time to load regardless.
Can you not thread it and do the load whilst the user is logging on?
------------------------------------
I will never again mention that I was the poster of the One Millionth Lounge Post, nor that it was complete drivel. Dalek Dave
CCC Link[ ^]
Trolls[ ^]
|
|
|
|
|
The user enters his username and password and then it starts grabbing data (in a background thread), but the user can't really use the app until it's loaded, and that loading takes a while.
|
|
|
|
|
|
And why would your app need to load 250K rows? Is the user going to read all of that instantaneously?
Use pagination, and load small chunks.
BTW: that is also why this forum shows a limited number of messages at any time.
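A sketch of one way to chunk it on SQL Server of that era: keyset ("seek") paging with TOP, keyed on the last Id seen, so each page is a cheap index seek. Table and column names here are illustrative, not from the original post:

```csharp
using System;

// Keyset paging sketch: instead of loading the whole table, fetch one page
// at a time, each page starting just after the last Id of the previous one.
class Paging
{
    public static string NextPageQuery(int pageSize)
    {
        // First call binds @lastId = 0; subsequent calls bind the max Id
        // of the page just displayed.
        return "SELECT TOP (" + pageSize + ") Id, Name FROM BigTable " +
               "WHERE Id > @lastId ORDER BY Id";
    }

    static void Main()
    {
        Console.WriteLine(NextPageQuery(300));
    }
}
```

Unlike OFFSET-style paging, this stays fast on deep pages because the WHERE clause seeks straight to the start of the page via the clustered index.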
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
Damn good answer.
I took him at his word that he needed to load them all at the front end; it didn't occur to me to ask whether that was necessary.
------------------------------------
I will never again mention that I was the poster of the One Millionth Lounge Post, nor that it was complete drivel. Dalek Dave
CCC Link[ ^]
Trolls[ ^]
|
|
|
|
|
Dalek Dave wrote: he needed to load them all at the front end
I don't know, I'm just shaking the tree here.
I sure wouldn't read that much at the user's expense, even if avoiding it means a complete redesign. Besides, data loaded in memory could become stale pretty soon in a multi-user system.
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
Luc Pattyn wrote: data loaded in memory could become stale pretty soon in a multi-user system.
This is true.
Unless there is some exceptional reason, I would think it better to poll the db on request, rather than suck it all in.
There is also the memory resources problem.
------------------------------------
I will never again mention that I was the poster of the One Millionth Lounge Post, nor that it was complete drivel. Dalek Dave
CCC Link[ ^]
Trolls[ ^]
|
|
|
|
|
Yeah, the original concern was preventing UI lag, which is why we suck down whole tables at once. But yeah, this is a multi-user system, so if someone leaves the UI open for a long time because they don't like the startup cost, the data could go stale.
|
|
|
|
|
Some of the other large tables I load "on demand", because I don't really need that data most of the time. I definitely need Table1, but that only has about 5 rows, so no biggie. Table2 holds the Table1 -> Table3 (1:M) relationships.
If I open, say, Table1.Row1Entity, then I only really need a subset of Table2 and a subset of Table3, but we are still talking ~350 rows at that level.
Now there is a further level of objects in the tree (this is where the 250,000 rows are)... I guess I don't need all 250,000 at once, but it's going to be around 350 x 108, so roughly 38,000 rows when I open Table1.Row1Entity. Those rows are obviously not all displayed at once, so technically I could grab only 108 at a time.
Hmm... I guess there are some optimizations I can make around loading only subsets of data, but I certainly don't want to introduce "pauses" in the UI. I'd rather have one pause at startup and have it be fast from then on...
Not sure whether loading ~300 rows at a time would introduce UI lag?
|
|
|
|
|
Loading 1000 rows should be fine, unless your data structure really isn't appropriate and requires a very complex and expensive query that returns few results; your message didn't suggest that.
I suggest you just give it a try with a mock query.
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
Nope. I only have around 10 stored procs. Everything else is done with parent / child relationships, cascades, etc.
I'll give it a whirl.
|
|
|
|
|
Luc Pattyn wrote: I suggest you just give it a try with a mock query.
The old "suck it and see" approach!
------------------------------------
I will never again mention that I was the poster of the One Millionth Lounge Post, nor that it was complete drivel. Dalek Dave
CCC Link[ ^]
Trolls[ ^]
|
|
|
|
|
No, get the foundation right.
Spend 1% of your time to make sure you're not about to waste the next 99% of it and ruin your schedule.
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
On the drive home I realized that I already do something similar to this.
In our multi-user environment, we have the concept of checking objects in and out. When you check out an object, it checks out all the child objects as well, and obviously does a "get latest" in the process. This "get latest" code is what I would need in this "speed up" mechanism when the user opens the outermost object.
The check-out process does quite a bit to refresh all the dependent objects. You have to grab the latest dependent objects roughly from the bottom up so you don't run into referential integrity problems. In case you are going to ask: no, I don't get them one at a time. I wrote a few optimized stored procedures that use inner joins to retrieve several levels of dependent objects at once, so it takes about 4 stored procs to refresh the outermost object and all its dependents.
Anyway, the last time I put a timer on a check-out of the outermost object, I think it took 1-3 seconds.
So if I only grab subsets of data, I'd introduce a 1-3 second delay.
Not much I can do about that, I think. I ran a profiler on the code and saw that most of the overhead was just calling the stored procs and merging the data back into the tables.
|
|
|
|
|
If it is a matter of seconds, the user won't mind. And you can humor him by showing a nice splash screen...
Luc Pattyn [Forum Guidelines] [My Articles] Nil Volentibus Arduum
Please use <PRE> tags for code snippets, they preserve indentation, improve readability, and make me actually look at the code.
|
|
|
|
|
Thanks...
I originally implemented it to load full tables to prevent "UI lag". Unfortunately, users of the system "abused" the original intent and a few of the tables ballooned to 250k rows :p.
I took your advice and changed it over to loading chunks of data as they are needed.
Startup time on my local machine went from 45 seconds to 1-2 seconds, with no noticeable UI lag since I'm just loading a few hundred rows.
The offshore guys in India were reporting 5 *minute* startup times. Hopefully they are happier now, though I haven't heard how long it takes them these days.
|
|
|
|