Introduction
Some time has passed since optional arguments were introduced with Visual C# 2010, and we have all gotten used to the convenience of not having to define a method overload for every different method signature. Recently, however, I came across a limitation when using optional arguments in enterprise solutions, and I now use them with care.
The limitation is that if you use optional arguments across libraries, the compiler hard-codes the default value into the consuming assembly and prevents you from re-deploying the provider library separately. This is a very common scenario in enterprise applications, where many libraries with different versions are married to each other, and the limitation makes it impossible to re-deploy only one DLL without re-deploying all related libraries.
In this post, I will explain the details of this limitation. As I mentioned in my previous post, Under the hood of anonymous methods in C#, it is very important to know the underlying architecture of a feature you are using. This post is similar in spirit: it explains the mechanics of optional arguments.
Background
I will not explain what optional arguments are, because the feature has been around for a while. For those who are new to C#, here is a quick reference to the description:
“Optional arguments enable you to omit arguments for some parameters. … . Each optional parameter has a default value as part of its definition. If no argument is sent for that parameter, the default value is used. Default values must be constants.” (Named and Optional Arguments (C# Programming Guide))
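For a quick refresher, here is a minimal sketch of an optional parameter and a named argument; the Messenger type and Send method are purely illustrative and not from any library:

using System;

public class Messenger
{
    // "Hello" is the compile-time constant default for the optional parameter.
    public void Send(string text, string greeting = "Hello")
    {
        Console.WriteLine(greeting + ", " + text);
    }
}

// Usage:
// new Messenger().Send("world");                  // greeting defaults to "Hello"
// new Messenger().Send("world", greeting: "Hi");  // a named argument overrides the default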
I ran into this limitation while using some functionality from a secondary library, injected through an IoC container. The secondary library was accessed through an interface on which some methods had optional arguments. Everything was deployed and working until I had to make some changes to the secondary library and alter an optional argument's default value. After re-deploying the secondary library, I noticed that the change did not take effect, and when I inspected the IL code, I found that the main library had the constant hard-coded into it.
In my situation I had interfaces, injections and a lot of complexity; to picture the situation more clearly, I will use the simplest form of the limitation, as in the following sample:
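The essence of the sample is a ProjectB class library exposing a method with an optional argument, and a ProjectA console application that calls it while omitting the argument. A minimal sketch (the names ProjectA, ProjectB, Class1 and Test come from the IL shown below; the file layout and method body are assumed):

// ProjectB (class library)
namespace ProjectB
{
    public class Class1
    {
        // "none" is the compile-time constant default for the optional parameter.
        public void Test(string arg1 = "none")
        {
            System.Console.WriteLine("arg1: " + arg1);
        }
    }
}

// ProjectA (console application referencing ProjectB)
namespace ProjectA
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            // arg1 is omitted, so the default value is used.
            new ProjectB.Class1().Test();
        }
    }
}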
This works very well, because ProjectA is simply using the arg1 value "none". Now let's look at what is happening behind the scenes:

If we compile ProjectA along with ProjectB and analyze the IL code, we will see that the optional argument is nothing more than a compiler feature. Because calling Test() is no different from calling Test("none"), the compiler decides to compile our code as Test("none"). That can be seen in the IL code and the disassembled C# code below; the string constant "none" is hard-coded into ProjectA.
.method private hidebysig static void Main(string[] args) cil managed
{
...
L_0008: ldstr "none"
L_000d: callvirt instance void [ProjectB]ProjectB.Class1::Test(string)
...
}
private static void Main(string[] args)
{
new Class1().Test("none");
}
For tightly coupled libraries, or for calls within the same library, it is good that the compiler helps us by eliminating some code and making our lives easier. But this convenience comes at a price:
Let's say we had to modify the Test method in ProjectB to Test(string arg1 = "something") and re-deploy it without re-deploying ProjectA. In this case, ProjectA would still be calling the Test method with "none".
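One common way to avoid baking the constant into the consumer is to replace the optional argument with an explicit overload, so that the default value lives inside ProjectB's own IL. This is only a sketch of an alternative design, not something the compiler does for you:

namespace ProjectB
{
    public class Class1
    {
        // The parameterless overload owns the default value, so it is compiled into ProjectB.
        public void Test()
        {
            Test("none"); // changing this to "something" later only requires re-deploying ProjectB
        }

        public void Test(string arg1)
        {
            System.Console.WriteLine("arg1: " + arg1);
        }
    }
}

With this shape, ProjectA compiles a plain call to Test(), never sees the constant, and picks up a changed default as soon as the new ProjectB is deployed.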
Conclusion
Knowing this, use optional arguments with caution across library boundaries when you have to support deployment scenarios where only part of the solution is re-deployed.