Stop and think about what you are doing: JSON is a human-readable, text-based transfer format, not an efficient storage format.
Suppose the object you want to deep clone is an array of integers containing five values:
// Five ints at their maximum value: ten digits each when written as text.
int[] data = { int.MaxValue, int.MaxValue, int.MaxValue, int.MaxValue, int.MaxValue };
string json = Newtonsoft.Json.JsonConvert.SerializeObject(data);
In memory, the array takes 4 bytes per entry (plus a little overhead, which we'll ignore for this exercise): 5 * 4 = 20 bytes.
The JSON string is this:
[2147483647,2147483647,2147483647,2147483647,2147483647]
Which is 56 bytes: nearly three times the in-memory size. If you had 400,000 of these arrays, the JSON alone would run to more than 22 MB, against 8 MB of raw data. As your objects get more complex, more JSON "management" data (property names, quotes, brackets) is added to support deserialization, and the size grows further.
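To make the arithmetic concrete, here is a minimal sketch (a small console program of my own; only the Newtonsoft call comes from the original snippet) that checks both sizes:

using System;
using System.Text;

class JsonSizeDemo
{
    static void Main()
    {
        int[] data = { int.MaxValue, int.MaxValue, int.MaxValue, int.MaxValue, int.MaxValue };
        string json = Newtonsoft.Json.JsonConvert.SerializeObject(data);

        int rawBytes = data.Length * sizeof(int);         // 5 * 4 = 20
        int jsonBytes = Encoding.UTF8.GetByteCount(json); // 56

        Console.WriteLine($"Raw: {rawBytes} bytes, JSON: {jsonBytes} bytes");
        // 400,000 * 56 bytes is roughly 22.4 MB:
        Console.WriteLine($"400,000 arrays as JSON: {400_000L * jsonBytes / 1_000_000.0:F1} MB");
    }
}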
Now think how big your "real world" data is going to be when converted to JSON: absolutely massive.
That's probably why you run out of memory: the resulting string is simply too big. I don't know what language Newtonsoft wrote the serializer in, or any of its internals, but it is seriously optimised for speed, so there is a good chance it doesn't use the .NET heap in the same way you and I do! Bear in mind too that .NET caps a single object at around 2 GB by default, so a sufficiently large JSON string can fail outright.
Try it: take your 400,000-object collection and prune it to two items. Generate the JSON for that and see how big the result is. Multiply that by 200,000 and you'll average out close to the final string size - something like the sketch below.
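A rough version of that experiment, assuming a hypothetical BigObject type standing in for your real class (both the type and the collection here are made up for illustration):

using System;
using System.Linq;

class EstimateDemo
{
    // Hypothetical stand-in for your real type - substitute your own class.
    class BigObject { public int Id; public string Name = "example"; }

    static void Main()
    {
        var allObjects = Enumerable.Range(0, 400_000)
                                   .Select(i => new BigObject { Id = i })
                                   .ToList();

        // Serialize just the first two items...
        string sampleJson = Newtonsoft.Json.JsonConvert.SerializeObject(allObjects.Take(2));

        // ...then scale up: 400,000 items / 2 per sample = multiply by 200,000.
        long estimatedChars = (long)sampleJson.Length * 200_000;
        Console.WriteLine($"Two items: {sampleJson.Length:N0} chars");
        Console.WriteLine($"Estimated full JSON: {estimatedChars:N0} chars");
    }
}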
I'd suggest you go back to the generic deep clone packages you have already found and re-read their documentation - JSON is not the way to go (and XML will have exactly the same problem, only much, much worse!).
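For what it's worth, the trivial array case above doesn't need serialization at all: for value-type elements, a plain array copy is already a deep clone, at exactly the raw 20 bytes (a minimal illustration, not a general-purpose solution):

using System;

int[] data = { int.MaxValue, int.MaxValue, int.MaxValue, int.MaxValue, int.MaxValue };

// For an array of value types, Clone() produces a fully independent copy:
// 20 bytes of payload, no 56-byte JSON detour.
int[] copy = (int[])data.Clone();

copy[0] = 0;
Console.WriteLine(data[0]); // still 2147483647 - the original is untouched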