|
Hmm... I was asking about the use of 'where T : class'. Anyhow, I found what I needed on Google under "Constraints on Type Parameters".
Thanks for the information.
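For anyone else landing here, a minimal sketch of what 'where T : class' buys you (the type and member names are made up for illustration):

```csharp
using System;

// 'where T : class' constrains T to reference types, so the code below can
// legally compare T values to null and use null to mean "not set".
class Holder<T> where T : class
{
    private T _value;

    public bool HasValue { get { return _value != null; } }  // needs T : class

    public void Set(T value) { _value = value; }
    public T Get() { return _value; }
}

class Program
{
    static void Main()
    {
        var h = new Holder<string>();
        Console.WriteLine(h.HasValue);  // False
        h.Set("hello");
        Console.WriteLine(h.Get());     // hello
        // new Holder<int>() would not compile: int is a value type.
    }
}
```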
"Don't worry if it doesn't work right. If everything did, you'd be out of a job." (Mosher's Law of Software Engineering)
|
|
|
|
|
But you still haven't said why you are trying to do this. Casting an Int32 to a List<> doesn't make much sense, so it is natural that you get this error.
There may be a better solution to your problem, so try to explain the actual problem that got you thinking about casting with generic parameters.
|
|
|
|
|
Hi all,
I'm wondering why C# imposes restrictions on assignments using output parameters that are not otherwise present. Presumably there is some logical reason for it, but I can't think of it.
Assume this code:
public interface SomeInterface { }

class A : SomeInterface
{
    public SomeInterface Field;

    void foo()
    {
        Parse(out this.Field);
    }

    static public void Parse(out A something)
    {
        something = new A();
    }
}
This code will not build - instead, the compiler will inform you that "The best overloaded method match for 'A.Parse(out A)' has some invalid arguments". Since A implements SomeInterface, a value of type A is of course assignable to Field, but nevertheless it does not build. If I instead write the code so that Parse returns A and the assignment is done in foo(), it builds without so much as a warning, as it should.
void foo()
{
    Field = Parse();
}

static public A Parse()
{
    return new A();
}
This causes some minor annoyances for me as I'm using the compiler-compiler Coco/R to generate a parser for a language of my creation. This tool lets me specify the parser by means of what's called an attributed grammar, and the generated code uses the attributes in the grammar and maps them to parameters. What I wanted to do was to use an output parameter for most productions in the grammar, like this:
Term<out Term t> = (. t = new Term(); Factor f; .)
  Factor<out f>       (. t.Mul(f); .)
  {
    '*' Factor<out f> (. t.Mul(f); .)
  | '/' Factor<out f> (. t.Div(f); .)
  }
.
This straightforwardly leads to a generated method for parsing a Term that looks like this:
void Term(out Term t)
{
    t = new Term(); Factor f;
    Factor(out f);
    t.Mul(f);
    while (la.kind == 15 || la.kind == 16)
    {
        if (la.kind == 15)
        {
            Get();
            Factor(out f);
            t.Mul(f);
        }
        else
        {
            Get();
            Factor(out f);
            t.Div(f);
        }
    }
}
My little problem arises because I've modeled Factor as an interface type. I want to be able to later on write a new class that performs some calculation or other, be free to derive this class from any class I wish, and so I've defined a simple interface IScalar { decimal GetValue(); } so that any class implementing it can be used as a factor.
I've modeled arithmetic expressions as a list of terms with + or - between them, a Term as shown above as a list of factors separated by * or /, and a Factor as a number, built-in function of the language, variable, or '(' Expr ')' - this model is sufficient to get correct operator precedence and handle nested expressions correctly.
It's not much of an issue as I can declare the output parameter to be of the interface type (these methods are private and only used internally in the parser after all), but I do wonder why C# imposes this restriction on output parameters. Or even if it really does - I haven't checked the language specification and it could be that it's the compiler rather than the spec that restricts me, though it seems a stretch.
|
|
|
|
|
I've been staring at your bit of code for about ten minutes now. I feel like I should know the answer to this, but I'm not sure I do. Essentially I guess it's that .NET can't do upcasting when using out parameters, and an exact type match is required.
Can't you do something with the compiler-compiler so it doesn't use this construct? I rather dislike out parameters and try to avoid them. Either that, or use a local variable of the exact type and then assign it to the interface-typed field?
Regards,
Rob Philpott.
|
|
|
|
|
I can't do anything not to use this construct if I want the method parsing a production to instantiate the object - there's no way to specify that I want to use a return type instead of an output parameter. I can use normal in parameters and also ref, but it doesn't solve the problem.
I can use a local variable in the parsing method, but then it kind of nullifies the point of using polymorphism - I didn't want to have to write any code for each specific type. It's not really a lot in this case, but I think I'd prefer to just make a method that parses a number "return" (i.e. use an output parameter of type) IScalar rather than Number.
The exact same issue exists if I use inheritance instead of interfaces to achieve polymorphism. That is, if a field is of type BaseClass and a method has an output parameter of type DerivedClass, I cannot assign the field using an out parameter. So I need to put the knowledge of what specific type it is in every context it is used.
A possible solution would be to omit the parameters completely and make a stateful parser. I could put expressions onto a stack as I parse them (Coco supports "semantic actions", which is any C# code I wish to insert when something is parsed, so I can call custom methods in my parser - this is usually used for things like looking up symbols in symbol tables or even emitting opcodes). Then, when I parse a number, it would find the Factor on the stack and assign the scalar to it. But it's a pity to have to create a stack that basically just recreates the structure that is already present on the call stack! That, after all, is the basic idea behind a recursive-descent parser!
I'm asking more out of curiosity (I try to see stuff I don't understand as an opportunity to learn something new) than anything else. The minor friction it creates in the parser code is overcome by pragmatically changing things so I use the interface type everywhere, e.g. Number(out IScalar s) instead of Number(out Number n), and I'm good to go. I don't really see any problems with this. If I want to make something like a syntax highlighter later on, that logic would have to examine the run-time type of the Scalar field anyway, since it has to accept any type of scalar.
Thanks anyway!
|
|
|
|
|
dojohansen wrote: Since A implements SomeInterface, Field is of course assignable to A, but nevertheless it does not build.
Assuming the compiler allowed that, guess what would happen if someone wrote
interface ISome {}
class SomeA : ISome {}
class SomeB : ISome {}

void Method()
{
    SomeA a = new SomeA();
    OutMethod(out a);
}

void OutMethod(out ISome some)
{
    some = new SomeB();
}
Boom - a variable typed as SomeA is now referring to an object of type SomeB, something that is not legal.
|
|
|
|
|
For some reason I never got a notification of this reply, and I only read it now after stumbling around the "my codeproject" pages.
Hence the great delay in saying thank you, Sir - it makes perfect sense now! In fact, I should have thought of it. But I did not.
|
|
|
|
|
I spoke too soon.
Unfortunately your example doesn't explain anything. The assignment your code example does is unsafe (based on the declared types), and it would not compile if you did the same assignment using return values instead of output parameters.
I should have spotted this immediately, because that's another way of posing my original question: Why should output parameters have different assignability than return values?
To see why your example misses the point, consider modifying your code like this:
void Method()
{
    SomeA a = Method2();
}

ISome Method2()
{
    return new SomeB();
}
Here, and in your example with the out parameters, we're attempting to assign a reference to a SomeB object to a variable of type SomeA, which is unsafe. But in my original code the assignment I'm trying to do is known to be safe, as you can verify since the same assignment done from a return value rather than an out parameter builds just fine (and indeed runs fine).
Nor has it got anything to do with the fact that I pass a field rather than a local variable; the same issue arises if I use a local. The compiler complains it can't convert the concrete type to the interface type, but it plainly can and does so if I declare the out parameter as having the interface type...
It makes no sense to me, and I'm still hoping someone can shed light on this.
|
|
|
|
|
dojohansen wrote: and it would not compile if you did the same assignment using return values instead of output parameters.
It would compile if you cast the return value to SomeA
SomeA a = (SomeA) Method2();
dojohansen wrote: But in my original code the assignment I'm trying to do is known to be safe, as you can verify since the same assignment done from a return value rather than an out parameter builds just fine (and indeed runs fine).
I guess I used the wrong example - here's what you are trying to do
interface ISome {}
class SomeA : ISome {}
class SomeB : ISome {}

void Method()
{
    ISome some = new SomeB();
    Method2(out some);
}

void Method2(out SomeA some)
{
    some = new SomeA();
}
The problem here is that the ISome variable need not refer to a SomeA - it could refer to any class that implements ISome - but Method2 expects a SomeA. In fact, the code wouldn't compile even if you removed the out keyword.
The code would compile if you changed the parameter of Method2 to out ISome some instead.
Casting to SomeA when calling Method2 would also work, but not if the parameter is out: the variable you're passing as out needs to be assignable, and a cast expression is not assignable.
And as for it working for return types, welcome to the world of covariance and contravariance. Return values are covariant and method parameters are contravariant, which means that return values can be more derived than what the caller expects, whereas method parameters can only be equal to or less derived than the caller's arguments.
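A small sketch of that variance rule using C# delegate conversions (the names here are invented for illustration):

```csharp
using System;

interface ISome {}
class SomeA : ISome {}

class VarianceDemo
{
    public static SomeA MakeA() { return new SomeA(); }
    public static void TakeSome(ISome s) { Console.WriteLine(s.GetType().Name); }

    static void Main()
    {
        // Covariant return: a method returning SomeA satisfies Func<ISome>,
        // because every SomeA is an ISome.
        Func<ISome> make = VarianceDemo.MakeA;

        // Contravariant parameter: a method taking ISome satisfies
        // Action<SomeA>, because it can handle any SomeA it is given.
        Action<SomeA> take = VarianceDemo.TakeSome;

        take((SomeA)make());  // prints "SomeA"
    }
}
```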
|
|
|
|
|
I'm afraid you are still not getting my question. I'm fully aware that it doesn't compile if I use the specific type that the function actually assigns to the out parameter as the declared type of that parameter. But my point is that I see absolutely no reason why it should not.
Again, if you simply frame the issue as "why should (output/ref) parameters have different assignability compared to return values?" it should become clear that your reply does not answer the question.
One detail while I'm at it: When using the output parameter it makes little sense to initialize the variable; just declare it. But let's not go off topic on that one.
|
|
|
|
|
Hmm. This was your original code sample.
public interface SomeInterface { }

class A : SomeInterface
{
    public SomeInterface Field;

    void foo()
    {
        Parse(out this.Field);
    }

    static public void Parse(out A something)
    {
        something = new A();
    }
}
You're asking why the compiler doesn't allow this.Field to be passed as an output parameter to Parse, which takes a type that implements SomeInterface as a parameter, right? Whereas it works if Parse returns an instance of A and foo stores it in this.Field, right?
dojohansen wrote: Again, if you simply frame the issue as "why should (output/ref) parameters have different assignability compared to return values?"
No, that's not the issue. The issue is that method parameters have different assignability compared to return values.
It is because method parameters and return values have different rules when it comes to what type they can be. Regardless of the fact that out parameters are used to simulate return values, they are method parameters to the language, and are therefore subject to method parameter rules.
The (simplified) rules are:
1. Return values can be more derived than what the caller expects.
ISomeInterface isi = Parse(...);
<Any_Type_That_Implements_ISomeInterface> Parse(...) {}
2. Method parameters should be the same or less derived than what the caller is passing
Parse(this.Field);
void Parse(<ISomeInterface_Or_Any_Class/Interface_ISomeInterface_derived_from> something){}
A little thinking will tell you why the rules exist.
ISomeInterface isi = new A();
A a = isi;
By passing this.Field to Parse, you are trying to do what the second statement in the above snippet is trying to do, and that's why the compiler is not allowing it. Using a return value has semantics similar to the first statement, and that's why it works.
Hope this helps
|
|
|
|
|
Hi,
Thanks for trying to help out. I think I get it. I disagree with your claim that my code is trying to perform that illegal assignment. "Field" is of the interface type (the less derived type), and the out parameter is of the concrete type that implements the interface type. The only assignment that would ever be attempted if that code ran (forgetting for a second the fact that it does not compile!) is the first of your two assignments, which is indeed legal.
In C# an out parameter cannot be used in the method declaring it unless that method has assigned it; it is considered unassigned, just like any local variable that has been declared but not initialized. Because of this "strictly one-way" behavior, an out parameter *could* logically speaking support the same assignability rules as return values. A ref parameter on the other hand could not, since it can be used in both directions, thus resulting in a downcast (to a more derived type) that is implicit yet unsafe.
But then I thought for a moment about what the compiler would actually do with output and ref parameters, and it occurred to me that it probably generates exactly the same IL for both! After all, the only difference is that you're not allowed to use the out parameter in the declaring method, and you are required to always assign it, both of which can be verified by the compiler but result in the same IL code (or no IL code at all if changing a ref parameter to an out parameter results in a build error - changing an out to a ref however will never cause a build error).
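That intuition can be checked from the language rules: out and ref produce the same CLR signature, which is why C# forbids overloading on the difference alone.

```csharp
// Sketch illustrating that out and ref are the same thing at the CLR level.
// Declaring both of these in one class is a compile-time error (CS0663),
// because the two methods would differ only in ref vs out:
//
//     void M(ref int x) { x++; }
//     void M(out int x) { x = 0; }   // error CS0663
//
// The only real differences are the definite-assignment rules the C#
// compiler enforces on the caller and the callee.
class RefOutDemo
{
    public static void Increment(ref int x) { x++; }       // caller must init x
    public static void Init(out int x) { x = 42; }         // callee must assign x
}
```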
I think I've therefore arrived at the following conclusion:
1) There are no reasons at the abstract level why a formal out parameter could not be more derived than its corresponding actual parameter. If the language allowed callers to use foo(out obj) where obj is declared as "object", no unsafe casting would ever result regardless of how derived the formal parameter is, e.g. foo(out int n) or foo(out CryptoStream s).
2) There are some (probably pretty good) practical reasons why parameters should all follow the same rules. If they do, formal parameters must necessarily be less derived than actual ones.
|
|
|
|
|
Yeah, the issue is that out is simply an attribute affixed to a parameter, it doesn't affect type checking in any way.
As you said, the error goes away if you rewrite Parse to take the interface as the out parameter (instead of the concrete type). You only lose the ability to specify that the method assigns instances of type A (or one of its derived types) to the out parameter.
Great discussion though
|
|
|
|
|
Hi all,
as stated in the subject line, can we call function overloading polymorphism?
According to some OOP authors, overloading is compile-time polymorphism and overriding is run-time polymorphism!
However, if we visit Grady Booch's definition of polymorphism - "one interface, many implementations" - that would mean overloading is not a type of polymorphism (as the interface changes when we change the function parameters)!
What do you say?
What is your opinion?
Deep
Happy coding
|
|
|
|
|
Overloading and overriding are different concepts; the latter is the one that is really polymorphism.
class EggSample
{
    void Method()
    {
    }

    void Method(string param)
    {
    }

    void Method(int count, string param)
    {
    }
}
Those are the overloads - the same method name with different signatures in a class.
Overriding would be:
class HenOther : EggSample
{
    void Method(string param)
    {
    }
}
It's like an inbred hillbilly family - they all look the same but each behaves in a slightly odd way.
Panic, Chaos, Destruction.
My work here is done.
|
|
|
|
|
Well that's not actually an override. What you did is method hiding, and the compiler will warn you and say you should put the "new" keyword on the method declaration if the hiding was intended.
Polymorphism is incredibly powerful and can greatly simplify the design of many a thing. It is, in my view, the single most important concept of OOP, even more important than encapsulation (though that is certainly important too).
Virtual methods are said to be "late bound" and non-virtual methods "early bound". What this means is that when the compiler creates the CIL (formerly MSIL) code for your C# or other .NET code, for non-virtual methods it will create code that invokes a specific method. Which method is called is determined by looking at the declared type of the reference to the object (or the type indicated for static methods, which obviously can never be virtual or behave polymorphically).
The above is NOT polymorphic. Given this code:
EggSample obj = new HenOther();
obj.Method("param");
We are allowed to do this of course, because HenOther extends EggSample. However, the compiler will create code to invoke EggSample.Method(), not HenOther.Method(), because "obj" (my reference) has the declared type EggSample, even though its run-time type is in fact HenOther.
Virtual methods on the other hand are handled differently. The compiler will generate code to look up the method to call at run-time from what's called the type's VMT - Virtual Method Table. This means there's a small performance hit involved in invoking virtual methods, but that the method called is determined by the run-time (actual) type of the object rather than the declared type.
This is extremely useful in lots of situations, because it allows us to divide and conquer in ways we never could without it. Imagine you want to create a data transformation system that can read input files, process the input and compute statistics or whatever, and transform flat files to XML and lots of stuff. You'd have a gazillion ways to do this of course, but one approach could be to build a tree structure where each node in the tree represents some operation on the data and the structure of the tree creates a breakdown of the work. You could then use polymorphism to great effect. You could use either an interface (which is of course inherently polymorphic) or a simple base class, like this:
abstract public class ProcessingNode
{
    public List<ProcessingNode> Children = new List<ProcessingNode>();

    virtual public void PreProcess(Data d) {}
    virtual public void PostProcess(Data d) {}

    public void Execute(Data d)
    {
        PreProcess(d);
        foreach (ProcessingNode node in Children) node.Execute(d);
        PostProcess(d);
    }
}
Purists might argue that the Pre- and Post-process methods should be abstract, but I prefer providing a default implementation that does nothing. Then, if I derive a node that only does postprocessing (that is, processing that occurs AFTER the subtree has finished processing) I don't need to override the PreProcess method (I would have to if it were abstract in the base class).
Now you'd have a very extensible model where you can derive lots of different processing nodes that perform various operations. For example, you could create a Selection node that selects some subset of data, and an Aggregate node that computes an aggregate on the selected data. The Selection node would do something like d.Select(...); on preprocess and d.Unselect(); in the postprocess method. The subtree inserted into the selection node would process only the current selection rather than all the data.
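A sketch of what such a Selection node might look like. The Data class and its Select/Unselect methods are the hypothetical API described above, not a real library, so minimal stand-ins are included to make the sketch self-contained:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the hypothetical Data and ProcessingNode above.
public class Data
{
    public Stack<string> Selections = new Stack<string>();
    public void Select(string criteria) { Selections.Push(criteria); }
    public void Unselect() { Selections.Pop(); }
}

abstract public class ProcessingNode
{
    public List<ProcessingNode> Children = new List<ProcessingNode>();
    virtual public void PreProcess(Data d) {}
    virtual public void PostProcess(Data d) {}
    public void Execute(Data d)
    {
        PreProcess(d);
        foreach (ProcessingNode node in Children) node.Execute(d);
        PostProcess(d);
    }
}

// The Selection node: its subtree (Children) processes only the selection,
// because Execute() in the base class runs PreProcess, then the children,
// then PostProcess.
public class SelectionNode : ProcessingNode
{
    private readonly string _criteria;
    public SelectionNode(string criteria) { _criteria = criteria; }
    override public void PreProcess(Data d)  { d.Select(_criteria); }
    override public void PostProcess(Data d) { d.Unselect(); }
}
```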
As a final example, you could create an "UpperCase" processing node as easily as this:
public class Upper : ProcessingNode
{
    override public void PreProcess(Data d)
    {
        d.Selection.Text = d.Selection.Text.ToUpper();
    }
}
assuming, for brevity, that the Selection has a text representation you may modify. As you can see, the base class can make use of new processing nodes you add without any of this code knowing about the types. All it knows is that the object has a capability to preprocess and postprocess, and there is no need to write a bunch of code to call this method if the operation is "Upper" and another if it's something else. You could go ahead and add encryption nodes, image decoders/converters, translation nodes, indexing nodes, sniffers looking for product names or companies, and a million other things. And if you released your system as an assembly, as long as the base class is public, other people could add new types of processing nodes as well.
And that's the power of polymorphism!
|
|
|
|
|
I didn't do that right.
I was coding 'freehand' without the use of an IDE or compiler, and I was writing some Java stuff last night, where the above (with the correct syntax for String) would work as described.
But polymorphism it is - that's the whole point of it. Different classes are able to accept the same method calls and behave differently depending on their own internals.
woteva.ToString(); will behave in the correct way for the class woteva - whatever it may be.
But then overloading is also a form of 'lotsaforms' .
Panic, Chaos, Destruction.
My work here is done.
|
|
|
|
|
Indeed, you didn't do that right. Not sure what you mean by "polymorphism it is", but I hope you're not claiming that the code you posted results in polymorphic behavior. It doesn't, as you can verify for yourself using this code:
using System.Diagnostics;

public class A
{
    public void M1() { Debug.WriteLine("A.M1()"); }
    virtual public void M2() { Debug.WriteLine("A.M2()"); }
}

public class B : A
{
    new public void M1() { Debug.WriteLine("B.M1()"); }
    override public void M2() { Debug.WriteLine("B.M2()"); }
}

class Program
{
    static public void Main(string[] args)
    {
        B b = new B();
        A b_declared_as_a = b;
        b.M1();
        b.M2();
        b_declared_as_a.M1();
        b_declared_as_a.M2();
    }
}
Now, we have two references to a single object instance. The non-virtual method M1() is called based on the *declared* type of the object - thus A.M1() executes when we invoke b_declared_as_a.M1() - because that reference is declared as type A.
The virtual method M2() however always results in running B.M2(), because that's the run-time type of the object and the declared type is irrelevant for virtual method calls.
ToString() is indeed virtual, and that's great since it lets you do things like string formatting more easily. The default implementation returns the type name, so if you call ToString() on b above you get the name. If you override the method you get whatever you return in the overridden method. Obviously there's no code in object.ToString() that somehow obtains a reference to the instance of type B and calls ToString() on it. The reason it works is that the code generated for ToString() isn't a normal method call, but a lookup in the VMT for the type (B in this example) followed by a dynamic invocation. This allows you to write code today that calls methods you only create in the future, perhaps in another assembly.
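A concrete example of that late binding with ToString():

```csharp
using System;

class Point
{
    public int X, Y;

    // Overrides object.ToString(); any caller holding this instance,
    // even through a reference typed as object, dispatches here via the VMT.
    public override string ToString()
    {
        return "(" + X + ", " + Y + ")";
    }
}

class ToStringDemo
{
    static void Main()
    {
        object o = new Point { X = 1, Y = 2 };
        Console.WriteLine(o);  // prints "(1, 2)", not the type name
    }
}
```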
Method hiding however is really not much more than having two types that both have a method with the same name.
|
|
|
|
|
Overloading is just calling different methods that share a name, distinguished by their parameter signatures. I wouldn't call this polymorphism. Really not. As said, it is resolved at compile time.
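A quick illustration of overload resolution happening at compile time, based on the static type of the argument:

```csharp
using System;

class OverloadDemo
{
    public static string Describe(object o) { return "object overload"; }
    public static string Describe(string s) { return "string overload"; }

    static void Main()
    {
        object o = "hello";                    // run-time type is string
        Console.WriteLine(Describe(o));        // "object overload": chosen
                                               // from o's compile-time type
        Console.WriteLine(Describe("hello"));  // "string overload"
    }
}
```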
Overriding a virtual method however is the mainstay of polymorphism.
Regards,
Rob Philpott.
|
|
|
|
|
anybody else here who like to share his/her thought on this topic??
|
|
|
|
|
I completely agree with Booch. While people may use words whatever way they like as far as I'm concerned, "polymorphism" loses its meaning if the interface changes. Technically there's no difference between methods f() and g() compared to f() and f(int). They're simply different methods. Overloads are useful, but only for the human users of the code. If I offer multiple ways to do the same thing, such as specifying a timeout using either a number of milliseconds (int) or a TimeSpan, it is obviously easier for the user to cope with two methods with the same name instead of getting potentially far more method names in his IntelliSense list. But that is also the extent of its usefulness - it has nada to do with polymorphism, virtual method invocation, or late binding.
I suppose my opinion ought to be expected if you read my entries elsewhere in this thread.
|
|
|
|
|
Hey guys, please consider the following code
...
RequestSize = Int32.Parse(FinalLength);
strMessage = strMessage.Remove(0, 1);
string command = strMessage.Substring(0, RequestSize);
strMessage = strMessage.Remove(0, RequestSize);
ProcessCommand(command);
This works perfectly, unless the command variable is upwards of 30,000 characters long.
If the command is that big, both the Substring and Remove methods just do nothing. When debugging and hitting those calls, it's like execution just breaks away from those methods and then nothing. The UI thread is not getting tied up, so it's not just taking some time to do it...
How can I fix this?
Thanks
<edit>Please see reply to Rob Philpott for more detail</edit>
Harvey Saayman - South Africa
Software Developer
.Net, C#, SQL
you.suck = (you.Passion != Programming & you.Occupation == jobTitles.Programmer)
1000100 1101111 1100101 1110011 100000 1110100 1101000 1101001 1110011 100000 1101101 1100101 1100001 1101110 100000 1101001 1101101 100000 1100001 100000 1100111 1100101 1100101 1101011 111111
modified on Thursday, April 9, 2009 5:14 AM
|
|
|
|
|
Strange - my understanding is that strings can be huge in .NET, so I don't know where this 30,000 ceiling comes from.
I can offer no solution to that, but I might suggest that rather than 'Removing' parts from such a huge string, you either iterate over it to get what you need or just copy substrings.
Regards,
Rob Philpott.
|
|
|
|
|
Rob Philpott wrote: I can offer no solution to that, but might suggest that rather than 'Removing' parts from such a huge string you either iterate over it to get what you need or just copy substrings.
Here's what happens exactly
After I get a server response from a TCP connection, the listener thread sends back all that data via a callback. One of these callbacks may contain more than one command.
Each command starts with ln=[length of command][terminator]
So I loop to find an "ln="; when I find one I remove it, then parse the length of the command to come. Then I take that command out (substring it so I can pass it to another method) and remove it from the pool of commands so that the loop can continue in case there's another command to execute. Here's the full method's code:
private void ProcessMessage(string CommandPool)
{
    RawResponseReceived(CommandPool);
    while (CommandPool != string.Empty)
    {
        if (CommandPool.StartsWith("ln="))
        {
            CommandPool = CommandPool.Remove(0, 3);
            int RequestSize = -1;
            string FinalLength = string.Empty;
            while (Int32.TryParse(CommandPool[0].ToString(), out RequestSize))
            {
                FinalLength += RequestSize.ToString();
                CommandPool = CommandPool.Remove(0, 1);
            }
            RequestSize = Int32.Parse(FinalLength);
            CommandPool = CommandPool.Remove(0, 1);
            string command = CommandPool.Substring(0, RequestSize);
            CommandPool = CommandPool.Remove(0, RequestSize);
            ProcessCommand(command);
        }
        else
        {
            Console.Write("Char removed - " + CommandPool[0]);
            CommandPool = CommandPool.Remove(0, 1);
        }
    }
}
NOTE: I just changed some of the variable names to make it more readable
The reason these strings are so big sometimes is that I'm getting binary data in them as well. Can this be causing the problems?
Harvey Saayman - South Africa
Software Developer
.Net, C#, SQL
you.suck = (you.Passion != Programming & you.Occupation == jobTitles.Programmer)
1000100 1101111 1100101 1110011 100000 1110100 1101000 1101001 1110011 100000 1101101 1100101 1100001 1101110 100000 1101001 1101101 100000 1100001 100000 1100111 1100101 1100101 1101011 111111
|
|
|
|
|
Hmm. OK. I wouldn't have thought binary would cause an issue, but if you're receiving data back from a TCP stream, why don't you process it in a streaming manner?
So, rather than waiting for the whole response and turning it into a huge string which you iteratively break down into separate parts, you do this as the response comes in. That way you don't have the memory footprint of the large string, and you can start processing when the data starts arriving rather than when it's finished.
Something like: ReadByte() check = 'l', ReadByte() check = 'n', ReadByte() check = '=', then read to the terminator to extract the command length and parse that. Then keep reading each command.
Only a suggestion, and I'm still at a loss as to why you're having problems with your current solution.
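A sketch of that streaming approach. The ';' terminator after the digits is an assumption for illustration - the original post doesn't name the terminator character:

```csharp
using System;
using System.IO;
using System.Text;

static class Protocol
{
    // Reads one "ln=<digits><terminator><payload>" frame from the stream.
    // Assumes ';' as the terminator; substitute the real one.
    public static string ReadCommand(Stream s)
    {
        Expect(s, 'l'); Expect(s, 'n'); Expect(s, '=');

        // Read the length digits up to the terminator.
        var len = new StringBuilder();
        int c;
        while ((c = s.ReadByte()) != -1 && c != ';')
            len.Append((char)c);

        int size = int.Parse(len.ToString());

        // Read exactly 'size' bytes; Stream.Read may return partial data,
        // which is likely the root of problems with very large commands
        // arriving over TCP.
        var buf = new byte[size];
        int read = 0;
        while (read < size)
        {
            int n = s.Read(buf, read, size - read);
            if (n <= 0) throw new EndOfStreamException("truncated frame");
            read += n;
        }
        return Encoding.ASCII.GetString(buf);
    }

    static void Expect(Stream s, char ch)
    {
        if (s.ReadByte() != ch) throw new InvalidDataException("bad frame header");
    }
}
```

Called in a loop over the network stream, this processes each command as it arrives instead of accumulating one giant string.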
Regards,
Rob Philpott.
|
|
|
|
|