Hi Pallab,
a file-based (external) merge sort would be the way to go for you.
Please read this blog entry:
http://splinter.com.au/blog/?p=142
This merge sort works by breaking one big file into smaller chunks.
Each chunk is sorted in memory with a conventional sort and written to
disk. Then, in a final operation, these chunks are merged into one big
sorted file. This way you can keep the memory footprint quite small even
for very big files: the largest piece of information you have to hold in
memory at any time is one of the chunks the original was broken into.
(Pretty neat, isn't it? This algorithm is from way back in the 60s/70s, when main memory was a very costly thing.)
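
To make this concrete, here is a minimal sketch in Python. It assumes one number per line in the input; the chunk size and the file names "numbers.txt" / "numbers_sorted.txt" are just placeholders for illustration, not taken from the blog post, so adapt it to whatever you are actually using:

import heapq
import os
import tempfile

CHUNK_LINES = 100_000  # max lines held in memory at once; tune to your RAM

def split_into_sorted_chunks(input_path):
    """Read the big file chunk by chunk, sort each chunk in memory, write it to a temp file."""
    chunk_paths = []
    with open(input_path) as infile:
        while True:
            lines = [line for _, line in zip(range(CHUNK_LINES), infile)]
            if not lines:
                break
            lines.sort(key=int)  # conventional in-memory sort of one chunk
            fd, path = tempfile.mkstemp(suffix=".chunk")
            with os.fdopen(fd, "w") as out:
                out.writelines(lines)
            chunk_paths.append(path)
    return chunk_paths

def merge_chunks(chunk_paths, output_path):
    """Merge all the sorted chunk files into one big sorted output file."""
    files = [open(p) for p in chunk_paths]
    try:
        with open(output_path, "w") as out:
            # heapq.merge streams from the chunk files, so memory stays small
            for line in heapq.merge(*files, key=int):
                out.write(line)
    finally:
        for f in files:
            f.close()
        for p in chunk_paths:
            os.remove(p)

if __name__ == "__main__":
    chunks = split_into_sorted_chunks("numbers.txt")
    merge_chunks(chunks, "numbers_sorted.txt")

The only thing kept in memory is one chunk (while sorting it) and one line per chunk (while merging), which is exactly why this scales to files much larger than RAM.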
Modification:
Forgot this in my original answer: after the file-based merge sort is done you'll have a sorted file that may still contain duplicates. So open this file and scan through it line by line, always remembering the last unique entry. If the line just read contains the same number as the last one, it is not written to the final file. Lather, rinse, repeat... (a short sketch of this pass follows below).
End of Modification
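
Here is an equally small sketch of that de-duplication pass, again in Python and again with made-up file names; it works because the input is already sorted, so duplicates are always adjacent:

def drop_duplicates(sorted_path, final_path):
    last = None  # last line written to the output so far
    with open(sorted_path) as infile, open(final_path, "w") as out:
        for line in infile:
            if line != last:   # only write a line if it differs from the previous one
                out.write(line)
                last = line

if __name__ == "__main__":
    drop_duplicates("numbers_sorted.txt", "numbers_unique.txt")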
Cheers,
Manfred