The question is not really about how you read the file. Parsing it is one thing; finding duplicates is another, especially with a large dataset. The right strategy depends on what you will do with this data in the future. If this is a one-time task, like finding duplicates and then forgetting about them (which leads me to suspect a serious design flaw), then you might look for some other method, but in general I suggest the following one:
If you already use a local RDBMS, use that; if not, choose an embedded RDBMS (SQL CE, Firebird Embedded, but SQLite would be the best fit). Let's take SQLite. Create a database and a table whose structure matches the XML record structure, and don't forget to add indexes that will help in looking up a record. Now start parsing the XML, and before inserting each record into the table, issue a SELECT to check whether it is a duplicate. This way, at the end you will have only the unique records in the table (see the sketch after the links below). In the case of SQLite you have several fine-tuning parameters you can use to optimize this dedicated database for the task. Look here:
http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html
here:
http://www.sqlite.org/pragma.html
and probably here too:
http://tech.vg.no/2011/04/04/speeding-up-sqlite-insert-operations/