|
The following exception occurred in the DataGridView:
System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
at System.ThrowHelper.ThrowArgumentOutOfRangeException(...)
at System.ThrowHelper.ThrowArgumentOutOfRangeException()
at LumenWorks.Framework.IO.Csv.CachedCsvReader.CsvBindingList.get_Item(Int32 index)
at LumenWorks.Framework.IO.Csv.CachedCsvReader.CsvBindingList.System.Collections.IList.get_Item(Int32 index)
at System.Windows.Forms.CurrencyManager.get_Item(Int32 index)
at System.Windows.Forms.DataGridView.DataGridViewDataConnection.GetError(Int32 rowIndex)
To replace this default dialog please handle the DataError event
My Form1 has just a Button1 and a DataGridView1, and the code is:
Imports System.IO
Imports LumenWorks.Framework.IO.Csv

Public Class Form1
    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        Dim csv As New CachedCsvReader(New StreamReader("H:/DB60630.DAT"), False)
        Me.DataGridView1.DataSource = csv
    End Sub
End Class
The data shows up in DataGridView1 when I run it, but when I scroll to the end it gives the above exception message. I'm a beginner in .NET, sorry if this is an obvious mistake.
My DB60630.DAT looks like:
40,2006,180,1030,17.26,53.65,35.49,1.333,0,0,0,.018,17.26,53.65,35.49,1.296,17.26,53.65,35.49,1.351,3.392,152,5.857,1183,-449.3,-192.3,174.3,81.6,32.66,1052,-841,-336.1,2037,-350.6,-143.8,17.73,.01,17.68,17.75,13.15
40,2006,180,1045,17.27,53.65,35.49,1.372,0,.005,0,.01,17.27,53.64,35.49,1.357,17.27,53.65,35.49,1.388,3.288,155.9,5.354,1122,-415.8,-174.1,163.4,71.8,31.55,947,-689.7,-307.4,1888,-328.8,-127.9,17.73,.009,17.71,17.77,13.23
40,2006,180,1100,17.29,53.64,35.49,1.399,0,.005,0,.007,17.29,53.64,35.49,1.39,17.29,53.65,35.49,1.409,2.575,152.5,5.996,1071,-388.2,-162.7,146.4,58.38,27.98,811,-643.3,-263.3,1710,-275.8,-96.2,17.73,.008,17.71,17.74,13.57
40,2006,180,1115,17.31,53.64,35.49,1.423,0,.005,0,.006,17.31,53.64,35.49,1.413,17.31,53.65,35.49,1.433,6.52,147.3,4.784,741,-257.6,-110,17.8,13.6,7.35,710,-297.2,-127.4,802,-232.6,-92.2,17.73,.007,17.71,17.74,13.47
40,2006,180,1130,17.33,53.65,35.49,1.419,0,.003,0,.012,17.33,53.64,35.49,1.401,17.33,53.65,35.49,1.433,6.779,151.5,4.196,654.6,-224.9,-98.1,26.28,17.04,8.01,614.5,-267.8,-118,708,-188,-80.8,17.72,.01,17.66,17.74,13.39
40,2006,180,1145,17.33,53.65,35.49,1.414,0,0,0,.011,17.33,53.65,35.49,1.401,17.33,53.65,35.49,1.43,5.492,156.3,5.084,718,-253.9,-112.9,53.88,32.81,13.28,674.8,-335.9,-151.7,879,-101.6,-95.7,17.72,.008,17.7,17.74,13.32
40,2006,180,1200,17.34,53.66,35.5,1.392,0,.005,.004,.009,17.34,53.65,35.49,1.38,17.34,53.66,35.5,1.412,5.45,158.1,4.495,917,-337.8,-150.1,213.4,90.3,40.53,700,-561.4,-242.9,1410,-236.1,-104.1,17.72,.009,17.69,17.74,13.26
40,2006,180,1215,17.37,53.67,35.5,1.36,.005,0,.003,.015,17.36,53.67,35.5,1.338,17.37,53.67,35.51,1.385,6.27,147.1,8.14,1580,-629.7,-283.4,440,190.8,85.8,1037,-1343,-540.9,2609,-389.4,-175.5,17.72,.009,17.7,17.77,13.21
40,2006,180,1230,17.35,53.7,35.53,1.336,0,.003,0,.009,17.35,53.69,35.53,1.322,17.35,53.7,35.53,1.353,6.439,153.1,4.061,957,-369.6,-164.4,455.9,188.6,82.3,689.5,-1122,-522.5,2638,-236.1,-106.1,17.71,.013,17.68,17.74,13.16
40,2006,180,1245,17.34,53.69,35.52,1.295,0,.004,0,.009,17.34,53.68,35.52,1.284,17.34,53.69,35.52,1.312,6.029,150.8,4.292,721,-286.4,-129.7,63.49,31.41,12.91,648,-445.1,-187.9,1041,-238.4,-108.1,17.71,.019,17.64,17.75,13.19
40,2006,180,1300,17.39,53.68,35.51,1.253,.005,0,0,.013,17.38,53.68,35.51,1.235,17.39,53.68,35.51,1.273,5.659,1
Thanks for any help!
Jason
|
Hmm, unless there was a problem with copy/paste, your last record is missing about half of its fields ...
With this input data, the latest build will correctly throw a MissingFieldCsvException unless you specify another action to be taken.
I will post an update to the article no later than this week.
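For reference, a hedged sketch of what choosing "another action" could look like with such an updated build. The MissingFieldAction property and its values here are assumptions about the updated API, not confirmed features of the build discussed in this thread:

```csharp
using System.IO;
using LumenWorks.Framework.IO.Csv;

using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), false))
{
    // Assumed API: substitute missing trailing fields instead of throwing.
    csv.MissingFieldAction = MissingFieldAction.ReplaceByNull;

    while (csv.ReadNextRecord())
    {
        // Short records now yield null fields rather than
        // raising a MissingFieldCsvException.
    }
}
```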
Sébastien
Intelligence shared is intelligence squared.
Homepage : http://sebastienlorion.com
|
Thanks Sebastien.
I need to check my office workstation tomorrow to see whether that's a copy/paste problem.
I'll test again after dinner.
Is the latest build you're referring to the one downloadable here?
Cheers,
Jason
|
Sorry ... I was referring to the code on my computer
Sébastien
|
Hi Sebastien,
Thanks for your great work. I have just tested on my home computer and it works fine. Maybe I didn't notice that the CSV file I used at work is truncated somewhere.
But it would still be highly desirable to have the MissingFieldCsvException ... to accommodate invalid data lines. So you will release that latest build this week?
I'm new to .NET and still learning how to handle database operations. Could you please briefly tell me how to use this CSV reader to read data from a CSV file and then update a table in Access?
My intention is to update the whole thing as a DataTable, not add records row by row.
Thank you so much!
Cheers,
Jason
|
If you want to use ADO.NET to update an Access table with the content of a CSV file, then I would recommend using the OLEDB provider instead of this class. Unless you have millions of records, but then I think Access would choke ...
You can see an example of loading a DataTable with the content of a CSV file in the OLEDB benchmark included in the demo program. As for updating the Access table, you can then use a System.Data.OleDb.OleDbDataAdapter. Search around on the net and you will find plenty of examples of how to use a data adapter ...
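As a rough illustration of the OLEDB approach described above (a sketch only: the folder path and file name are assumptions, the Jet text driver must be installed, and HDR=No makes the driver name the columns F1, F2, ...):

```csharp
using System.Data;
using System.Data.OleDb;

class CsvToDataTable
{
    static DataTable Load()
    {
        // The Jet text driver treats the folder as the "database" and
        // each file as a table; '#' replaces the '.' in the file name.
        string connectionString =
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data;" +
            @"Extended Properties=""text;HDR=No;FMT=Delimited""";

        DataTable table = new DataTable();

        using (OleDbConnection connection = new OleDbConnection(connectionString))
        using (OleDbDataAdapter adapter =
            new OleDbDataAdapter("SELECT * FROM [DB60630#DAT]", connection))
        {
            adapter.Fill(table); // loads the whole file in one call
        }

        return table;
    }
}
```

From there, a second OleDbDataAdapter pointed at the Access database (with an appropriate InsertCommand) can push the rows across in one batch.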
Good luck!
Sébastien
|
Thanks, Seb! I just rushed through some chapters in a few books... Now I'm OK with that.
Thank you. I'll be using OleDb for the CSV.
You've got a nice blog and a beautiful girlfriend (or wife)! :P
Cheers,
Jason
|
Hi Sebastien,
Need to ask you a question...
My project's specification has changed in a critical way. The raw data will now be appended to the same file every day instead of a new file being created for each collection.
In this case, it's really not efficient to read in the whole CSV file every time. Is there any way to read a CSV file from a certain line onwards?
Thanks!
Jason
|
You can call ReadNextRecord() N times until you reach the first new record, or you can advance the underlying StreamReader before passing it to the constructor of the CsvReader. If you can keep track of the end position of the last read (it should be the previous file size), the latter option looks like the most efficient.
I am not aware of how (if at all possible) to do that with the OLEDB provider.
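A sketch of the latter option, assuming the previous import recorded the file size when it finished. The LoadSavedPosition/SavePosition helpers are hypothetical placeholders for wherever that bookkeeping lives:

```csharp
using System.IO;
using LumenWorks.Framework.IO.Csv;

// File size recorded at the end of the previous import (assumption).
long lastPosition = LoadSavedPosition(); // hypothetical helper

long newPosition;
using (FileStream stream = File.OpenRead(@"H:\DB60630.DAT"))
{
    newPosition = stream.Length;          // remember for the next run
    stream.Seek(lastPosition, SeekOrigin.Begin);

    using (CsvReader csv = new CsvReader(new StreamReader(stream), false))
    {
        while (csv.ReadNextRecord())
        {
            // Only records appended since lastPosition are seen here.
        }
    }
}
SavePosition(newPosition); // hypothetical helper
```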
Sébastien
|
Thanks a lot Sebastien!
I am using the latter option now and it's working fine.
Thanks for answering so many beginners' questions!
Cheers,
Jason
|
Hi Sebastien,
I'm facing a problem.
In the raw data CSV file, I may have lines like this:
<...Data that have been imported..>
<...By last file size, advance StreamReader instance up to here...>
,.005,.007,20.43,54.06,35.8,.487,20.45,54.06,35.81,.509
40,2006,134,530,20.48,54.06,35.8,.55,.005,0,.005,.004,20.47,54.06,35.8,.544
40,2006,134,545,20.5,54.06,35.81,.605,.003,0,.005,.003,20.5,54.06,35.8,.601
40,2006,134,600,20.53,54.06,35.81,.678,.005,0,0,.003,20.52,54.06,35.81,.676
...
So my csv.FieldCount (no headers) will be 11, taken from the first line, and will always be 11.
Actually I would like to read the raw data into a DataGridView control and let the user see that the first line is invalid. (This truncated data is caused by the modem dropping the connection sometimes; it only ever occurs on the first line of each batch appended to the raw data file.)
Could you please give me some hints on this?
Thanks!
Cheers,
Jason
|
About half of the CSVs I read are compressed. With SharpZipLib, I can read them directly, without decompressing to a temporary file, by wrapping a ZipInputStream with a StreamReader and passing the reader to the CsvReader constructor. This was working fine with the previous version of CsvReader, but the new version changed the way the buffer size is calculated, and I am curious why. The current version appears to try to create a buffer the size of the file (or maybe it is reading the size of the reader's own buffer). In any case, ZipInputStream.Length always returns 0. I'm not sure whether this is a problem in ZipInputStream or in CsvReader, but here is how I fixed it:
if (reader is StreamReader)
{
    long length = ((StreamReader) reader).BaseStream.Length;

    // A zero length means the underlying stream (e.g. ZipInputStream)
    // does not report its size, so fall back to the requested buffer size;
    // otherwise never allocate a buffer larger than the file.
    _bufferSize = (length > 0) ? (int) Math.Min(bufferSize, length) : bufferSize;
}
else
    _bufferSize = bufferSize;
|
What I want to avoid is creating a buffer larger than the file. At first glance, I would say this is a problem with ZipInputStream, but I will use your fix to handle such cases anyway. Thanks again!
Sébastien
|
Oh yeah, that makes sense.
I agree that it's goofy that ZipInputStream doesn't support Length. The information is available in Zip archives; maybe there was concern about returning the compressed versus the uncompressed length. You can get both values from ZipEntry. For ZipInputStream, I would say Length should be the uncompressed length, because the stream returns uncompressed data, but that point should be raised with the SharpZipLib project.
|
A new version has just been posted that handles stream readers that report a zero length.
Sébastien
|
If you use tab as the delimiter and trimSpaces is enabled, it causes problems with empty fields. To work around this, I made the following change:
private bool IsWhiteSpace(char c)
{
    // Never treat the delimiter itself as whitespace, otherwise
    // empty tab-delimited fields are swallowed by the trimming.
    if (c == _delimiter)
        return false;

    if (c <= '\x00ff')
        return (c == ' ' || c == '\t');
    else
        return (System.Globalization.CharUnicodeInfo.GetUnicodeCategory(c) ==
            System.Globalization.UnicodeCategory.SpaceSeparator);
}
|
Good catch, I will fix that and add an associated unit test. Thanks!
Sébastien
|
A new version has just been posted that correctly handles whitespace delimiters.
Sébastien
|
After copying SampleData1 into my app, I get the following error message:
A field or property with the name 'First Name' was not found on the selected datasource.
What am I missing?
Thanks
Matthew
|
Unfortunately, I mistested the complex binding in ASP.NET 2.0. I was using a custom datagrid (derived from GridView) and assumed that the grid had the same binding behavior as its parent.
You can refer to my earlier comment for more information.
Sébastien
|
Hi,
I'm having problems with some special characters (Brazilian Portuguese): they are not shown, they are simply ignored.
I think changing the encoding to System.Text.Encoding.Default would solve the problem.
How can I do that?
Thanks.
PS: Really nice work!!
|
When creating the TextReader, you should specify the text encoding of your choice:
using (CsvReader csv = new CsvReader(new StreamReader(File.Open(path, FileMode.Open), Encoding.Unicode, false), true))
{
}
You could also try new CsvReader(File.OpenText(path), true), which is a bit shorter ...
Sébastien
|
Now all is fine.
Thank you very much.