Strictly speaking, it is not possible, even in principle. To prove that, a single counterexample is enough, right?
The example about sending out sensitive information is a very good one. This criterion alone is not sufficient, but if such data is sent out, it is certainly a problem. Suppose a detector flags this situation. There are non-malicious applications designed to send exactly such information: for example, error reports are sent (hopefully with the customer's consent) to the company for support and bug fixing (the ugly truth is that customers are unable to describe what they see; instead, they tell you what they think they see, which is usually not true). So the detector of malicious activity will flag a legitimate program as malicious, a false positive.
Should an attempt to delete a file be considered malicious? Apparently not, because then we could not implement file managers.
One could say: let's demand that such an operation only counts as non-malicious when it is performed with the user's consent. The problem is that this is theoretically impossible to verify by analyzing the algorithm. Why? There is a fundamental result of computer science (computability theory, http://en.wikipedia.org/wiki/Computability_theory[^]): it is impossible in general to predict what a Turing-complete program will do over an arbitrarily long time; this is the essence of the halting problem. Likewise, a program may contain code that asks for the user's consent, but how can we compute whether the program will ever reach that fragment of code? The detector is bound to produce false positives and false negatives.
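To make this concrete, here is a minimal sketch in Python of the classic diagonalization trick, the same one used to prove the halting problem undecidable. Everything named here is hypothetical; the point is only the shape of the argument: if a perfect detector existed, one could write a program that defeats it.

```python
# Sketch of the diagonalization argument: assume a perfect detector
# exists, then construct a program it must misclassify.

import sys

def is_malicious(source: str) -> bool:
    """Hypothetical perfect detector: returns True if and only if the
    program whose source code is `source` ever acts maliciously.
    Assumed to exist for the sake of the argument; no such total,
    always-correct function can actually be implemented."""
    raise NotImplementedError

def do_harm():
    """Stands in for whatever action the detector counts as malicious."""
    ...

def gotcha():
    with open(sys.argv[0]) as f:
        my_source = f.read()   # this very program's own source code
    if is_malicious(my_source):
        return                 # detector says "malicious" -> behave innocently
    else:
        do_harm()              # detector says "benign" -> act maliciously

# Whichever answer is_malicious gives about this program, the program
# does the opposite, contradicting the assumption that the detector
# is always correct.
```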
At the same time, it may not be completely hopeless. In real life, a detector would be useful if it could classify all programs into three categories: certainly malicious, certainly non-malicious, and uncertain; see the sketch below.
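Just as a sketch of what such a conservative classifier's interface might look like (nothing here is an existing API; the names, signatures, and criteria are my assumptions):

```python
from enum import Enum

class Verdict(Enum):
    MALICIOUS = "certainly malicious"      # e.g. matches a known-bad signature
    BENIGN = "certainly non-malicious"     # e.g. provably cannot do harm
    UNCERTAIN = "analysis inconclusive"    # everything the theory forbids deciding

# Hypothetical database of known malware fingerprints.
KNOWN_BAD_SIGNATURES = {"known-malware-hash-1", "known-malware-hash-2"}

def classify(program_hash: str, uses_no_dangerous_apis: bool) -> Verdict:
    """Sound but incomplete: answers MALICIOUS or BENIGN only when the
    verdict can be justified, and honestly reports UNCERTAIN otherwise."""
    if program_hash in KNOWN_BAD_SIGNATURES:
        return Verdict.MALICIOUS
    if uses_no_dangerous_apis:   # e.g. no file or network access at all
        return Verdict.BENIGN
    return Verdict.UNCERTAIN     # the undecidable cases land here, by design
```

The price of never being wrong in the first two categories is that the third one can never be emptied; that is exactly what the computability result above predicts.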
The biggest problem is the definition of what to consider malicious in the first place -- I'm pretty skeptical about such a prospect.