You are using the same buffer for input and output. That won't work. See the documentation of the MultiByteToWideChar function (Windows).
It should be like this:
int nSize = MultiByteToWideChar(nlanguageCodePage, 0, chAnsiBuf, -1, NULL, 0);
LPWSTR sUnicodeBuf = new WCHAR[nSize];
MultiByteToWideChar(nlanguageCodePage, 0, chAnsiBuf, -1, sUnicodeBuf, nSize);
// ... use sUnicodeBuf here ...
delete [] sUnicodeBuf;
However, when the ANSI input buffer has a fixed size, that same size can also be used for the Unicode output buffer, because the converted string will never contain more wide characters than there are ANSI characters in the input string:
WCHAR wUnicodeBuf[NMLANG_MaxNBuf];
while (fgets(chAnsiBuf, NMLANG_MaxNBuf, pFile) != NULL)
{
    MultiByteToWideChar(nlanguageCodePage, 0, chAnsiBuf, -1, wUnicodeBuf, NMLANG_MaxNBuf);
    if (nBOM == 0) { arcOut.Write(&bom, 2); }
    arcOut.WriteString(wUnicodeBuf);
    nBOM++;
}
That should work. If the result is not as expected, check the other involved functions like arcOut.WriteString(), whether the BOM is correct, and whether your input file is really encoded with the code page nlanguageCodePage.
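For a UTF-16 LE output file (which is the encoding MultiByteToWideChar produces on Windows), the BOM is the single wide character 0xFEFF. Your question does not show how bom is declared, so this is only an assumption of what it might look like:
// Assumption: the output is meant to be UTF-16 LE, so the BOM is the wide
// character 0xFEFF, stored as the two bytes FF FE at the very start of the file.
const WCHAR bom = 0xFEFF;
// ...
if (nBOM == 0) { arcOut.Write(&bom, sizeof(bom)); } // sizeof(WCHAR) is 2 on Windows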
[EDIT]
Another possible source of the problem is the arcOut.WriteString() call, if it converts the Unicode string back to ANSI. You may then use a binary write instead:
int len = MultiByteToWideChar(nlanguageCodePage, 0, chAnsiBuf, -1, wUnicodeBuf, NMLANG_MaxNBuf);
if (nBOM == 0) { arcOut.Write(&bom, 2); }
if (len > 0)
{
    // len includes the terminating L'\0' because -1 was passed as the input length,
    // so write len - 1 characters to avoid embedding a null character in the file
    arcOut.Write(wUnicodeBuf, (len - 1) * sizeof(WCHAR));
}
nBOM++;
[/EDIT]