If `newRow` can't be reused, there is no optimization possible. If `newRow` can be reused, I would rewrite the code this way:
var newRow = ds.Tables["FACT"].NewRow();
foreach (var y in InsertRows)
{
    newRow["S_ID"] = DestinationScenarioId;
    newRow["G_ID"] = y["GLOBALFIELD_ID"];
    newRow["AMOUNT"] = y["AMOUNT"];
    foreach (var t in ValidBCodes)
    {
        newRow["B_ID"] = t;
        ds.Tables["FACT"].Rows.Add(newRow);
    }
}
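For what it's worth, with a stock System.Data.DataTable the reuse variant above won't actually run: a DataRow belongs to at most one table, and Rows.Add throws an ArgumentException once the row has been attached. So "reusable" would have to mean a custom row type, not a plain DataRow. A minimal sketch demonstrating the restriction (the table and column here are invented for the demo):

```csharp
using System;
using System.Data;

class RowReuseDemo
{
    static void Main()
    {
        var table = new DataTable("FACT");
        table.Columns.Add("B_ID", typeof(int));

        var row = table.NewRow();
        row["B_ID"] = 1;
        table.Rows.Add(row);      // first Add succeeds: the row attaches to the table

        row["B_ID"] = 2;          // mutating the attached row is fine...
        try
        {
            table.Rows.Add(row);  // ...but adding the same row again throws
        }
        catch (ArgumentException e)
        {
            Console.WriteLine("second Add failed: " + e.Message);
        }
    }
}
```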
[update]
If `newRow` can't be reused, simply swapping the loops makes it clear that `y["GLOBALFIELD_ID"]` and `y["AMOUNT"]` are constant in the inner loop, so they can be read into locals once per outer iteration instead of once per row/code pair. (Don't count on the compiler to hoist these reads for you: the DataRow indexer is a method call, so the JIT must assume it can have side effects.)
foreach (var y in InsertRows)
{
    // Read the per-row values once; they don't change in the inner loop.
    var globalFieldId = y["GLOBALFIELD_ID"];
    var amount = y["AMOUNT"];
    foreach (var t in ValidBCodes)
    {
        var newRow = ds.Tables["FACT"].NewRow();
        newRow["S_ID"] = DestinationScenarioId;
        newRow["B_ID"] = t;
        newRow["G_ID"] = globalFieldId;
        newRow["AMOUNT"] = amount;
        ds.Tables["FACT"].Rows.Add(newRow);
    }
}
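If the column order of FACT is known, DataRowCollection.Add also has a `params object[]` overload that creates and appends the row in one call, avoiding the four indexer writes per row. A sketch under the assumption that the columns are declared in the order S_ID, B_ID, G_ID, AMOUNT (the schema and data below are stand-ins for the question's setup):

```csharp
using System;
using System.Data;

class AddOverloadDemo
{
    static void Main()
    {
        var fact = new DataTable("FACT");
        fact.Columns.Add("S_ID", typeof(int));
        fact.Columns.Add("B_ID", typeof(int));
        fact.Columns.Add("G_ID", typeof(int));
        fact.Columns.Add("AMOUNT", typeof(decimal));

        int destinationScenarioId = 7;                    // stand-in values for
        var insertRows = new[] { (gId: 1, amount: 10m) }; // the question's data
        var validBCodes = new[] { 100, 200 };

        foreach (var y in insertRows)
        {
            foreach (var t in validBCodes)
            {
                // Rows.Add(params object[]) builds and appends the row in one call;
                // the values must match the column order declared above.
                fact.Rows.Add(destinationScenarioId, t, y.gId, y.amount);
            }
        }

        Console.WriteLine(fact.Rows.Count); // one row per (insert row, code) pair
    }
}
```

This is only safe if the schema is under your control; with indexer assignments the column order doesn't matter, so the version above is the more defensive choice.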