You still need the Rupee font, see:
http://en.wikipedia.org/wiki/Indian_rupee_sign[^],
http://techie-buzz.com/india-tech/ubuntu-10-10-indian-rupee-font.html[^].
As this character was officially presented on 15 July 2010 and only recently standardized by Unicode (U+20B9, added in Unicode 6.0), it hasn't been integrated into most operating systems yet (see the links above).
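Even without font support, the code point itself is perfectly usable from code. A minimal C# sketch (it prints the code point value rather than trying to render the glyph):

using System;

class RupeeDemo
{
    static void Main()
    {
        // U+20B9 INDIAN RUPEE SIGN. The string holds the code point
        // regardless of whether any installed font has a glyph for it.
        string rupee = "\u20B9";
        Console.WriteLine("Code point: U+{0:X4}", (int)rupee[0]); // U+20B9
    }
}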
As to DOS and ASCII: just forget it. This part of the question makes no sense and is probably based on some misunderstanding of what Unicode is. De-confusing reading is here:
http://en.wikipedia.org/wiki/Unicode[^],
http://unicode.org/[^],
http://unicode.org/faq/utf_bom.html[^].
[EDIT]
In reply to "but":
You should understand that the whole notion of "conversion" from Unicode to ASCII and from ASCII to Unicode makes no sense, because Unicode, in contrast to ASCII, is not an encoding. (However, it depends on what you call "Unicode": in Windows jargon, the term "Unicode" is often used for one particular Unicode Transformation Format (UTF), UTF-16LE.) Unicode is a standard which defines a formal one-to-one correspondence between "characters", understood as cultural categories abstracted from their exact graphics (for example, Latin "A" and Cyrillic "А" are different characters; you can test it by using text search on this paragraph), and integer numbers, abstracted from their computer representation, such as size and endianness.

Contrary to a common misconception, Unicode is not a 16-bit code: the range of code points presently standardized goes far beyond the range that fits in 16 bits, the Basic Multilingual Plane (BMP). And just as there are different integer types, there are several different ways to represent Unicode text, called UTFs.
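A minimal C# sketch of both points (the sample characters are my own choice for illustration):

using System;

class CodePointDemo
{
    static void Main()
    {
        // Latin "A" (U+0041) and Cyrillic "А" (U+0410) look the same
        // but are different characters, hence different code points.
        Console.WriteLine((int)'A'); // 65   (U+0041)
        Console.WriteLine((int)'А'); // 1040 (U+0410)

        // A code point beyond the BMP, e.g. U+1D11E (MUSICAL SYMBOL G CLEF),
        // cannot fit in a single 16-bit char; in UTF-16 it takes a
        // surrogate pair, so the resulting string has length 2.
        string clef = char.ConvertFromUtf32(0x1D11E);
        Console.WriteLine(clef.Length); // 2
    }
}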
Windows internally represents Unicode text using UTF-16LE (and yes, UTF-16, like UTF-8 and UTF-32, can represent code points beyond the BMP), but the APIs are well abstracted from this fact. The UTFs appear when character data is serialized (not "converted") into an array of bytes. Character data can also be serialized into ASCII, but then information may be lost, because the range of ASCII is only 0 to 127; within that range the code points have the same meaning as in Unicode. Traditionally, the lost characters (those beyond the ASCII range) are replaced with '?'. ASCII data, as an array of bytes, can be deserialized into character data (a .NET string), and naturally that always goes without loss. In other words, ASCII has a one-to-one correspondence with the subset of Unicode covering code points 0 to 127.
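Here is a sketch of this in .NET, using System.Text.Encoding (Encoding.Unicode is the UTF-16LE encoding mentioned above; the default ASCII encoder does the '?' replacement):

using System;
using System.Text;

class SerializationDemo
{
    static void Main()
    {
        string text = "Rs \u20B9"; // contains one non-ASCII character

        // Serializing to UTF-16LE or UTF-8 is lossless.
        byte[] utf16 = Encoding.Unicode.GetBytes(text); // UTF-16LE
        byte[] utf8  = Encoding.UTF8.GetBytes(text);

        // Serializing to ASCII loses U+20B9: the default fallback
        // replaces it with '?'.
        byte[] ascii = Encoding.ASCII.GetBytes(text);
        Console.WriteLine(Encoding.ASCII.GetString(ascii)); // "Rs ?"

        // Deserializing genuine ASCII bytes is always lossless, because
        // ASCII coincides with Unicode code points 0 to 127.
        byte[] pureAscii = Encoding.ASCII.GetBytes("Hello");
        Console.WriteLine(Encoding.ASCII.GetString(pureAscii)); // "Hello"
    }
}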
That said, again: there is no such concept as "conversion" between ASCII and Unicode.
—SA