When I create a SQL CLR table-valued function in C# that returns IEnumerable, and the table definition contains a decimal column (e.g. "decimal(10,4)"), the values appear to be rounded to integers when the rows are actually produced: the column's data type is still decimal(10,4), but the values contain no fractional part.
This seems to have started happening recently: code that has worked for years produces wrong results if I recompile and redeploy it today. "float" works fine; only "decimal" seems to be affected. Existing builds (compiled and deployed months or years ago) still work correctly.
Anyone else noticed this?
Here's a test example: in this case, "select * from dbo.Example()" loses the fractional parts, returning Dan: 36, Dave: 72:
using Microsoft.SqlServer.Server;
using System.Collections;
using System.Data.SqlTypes;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction(
        DataAccess = DataAccessKind.Read,
        SystemDataAccess = SystemDataAccessKind.Read,
        IsDeterministic = true,
        FillRowMethodName = "FillTest",
        TableDefinition = "Name varchar(50), Value decimal(10,4)")]
    public static IEnumerable Example()
    {
        return new[]
        {
            new Person("Dan", 36.294),
            new Person("Dave", 72.492)
        };
    }

    private class Person
    {
        public Person(string Name, double Value)
        {
            this.Name = Name;
            this.Value = Value;
        }

        public string Name;
        public double Value;
    }

    private static void FillTest(object Data,
        out SqlString Name,
        out SqlDecimal Value)
    {
        Person P = Data as Person;
        Name = new SqlString(P.Name);
        // Uses the SqlDecimal(double) constructor overload;
        // this is the value that comes out with no fractional part.
        Value = new SqlDecimal(P.Value);
    }
}
What I have tried:
I've tried using "float" instead of "decimal" (with "SqlDouble" in the fill method), and that works. But I'd prefer "decimal" because it lets me specify the exact precision and scale of the output.
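If the SqlDecimal(double) constructor overload turns out to be the culprit, one possible workaround (a sketch only; I haven't verified it against the affected build) would be to route the conversion through System.Decimal and use the SqlDecimal(decimal) overload instead, e.g. in the fill method:

```csharp
using System;
using System.Data.SqlTypes;

class SqlDecimalSketch
{
    static void Main()
    {
        double d = 36.294;

        // The overload FillTest currently uses (from double):
        SqlDecimal fromDouble = new SqlDecimal(d);

        // Sketch of a workaround: convert to System.Decimal first,
        // so the SqlDecimal(decimal) overload is used instead.
        SqlDecimal fromDecimal = new SqlDecimal((decimal)d);

        Console.WriteLine(fromDouble);
        Console.WriteLine(fromDecimal);
    }
}
```

In FillTest that would be "Value = new SqlDecimal((decimal)P.Value);". The scale would then be whatever the decimal conversion produces rather than exactly (10,4), but SQL Server should coerce it to the declared column type.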