|
|
Here is a console app that may do what you are after; replace the Console.WriteLine with a call to your parse method.
class Program
{
    static void Main(string[] args)
    {
        string filePath = @"D:\ReadMe.txt";
        using (var streamReader = new System.IO.StreamReader(filePath))
        {
            while (!streamReader.EndOfStream)
            {
                var toParse = streamReader.ReadLine();
                System.Console.WriteLine(toParse);
            }
        }
        System.Console.ReadLine();
    }
}
|
|
|
|
|
OK, this is basically what I did.
srLog is of type StreamReader.
My problem was that I didn't need to read all the file content into memory and then process it.
I was thinking it is more efficient to read all the file content (in one stroke) and work on it than to access the file system for each line (ReadLine) and process it.
Isn't that more expensive?
while (!srLog.EndOfStream)
{
    line = srLog.ReadLine();
    fileLines.Add(line);
}
fileLinesContent = fileLines.ToArray();
|
|
|
|
|
Being honest, I don't know which way would perform better; your original post involved calling ReadLine many times in a loop anyway, and you were getting memory issues. This way should not be less efficient than the originally proposed solution.
I'd suggest you try it with something many times larger than your expected log file size; if the performance is good, then don't stress too much about how you could make it better. If the file IO part of this app is not the bottleneck in the process, then don't lose too much time optimising it.
|
|
|
|
|
No, it's not. The way you were reading the file (line by line) and storing it in memory is no less expensive than processing that giant log one line at a time.
You're trying to read 700MB of data into memory and running into OutOfMemory problems. How efficient do you think that is? By reading everything into memory all at once, your solution only works on limited log sizes, dependent on system memory.
If you process every line, one at a time, without reading the entire file into memory, you can process log files of ANY size, up to the file size limit of the operating system and do it without requiring the machine to have terabytes of memory.
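As a minimal sketch of this line-at-a-time approach (the file path, the `needle` filter and the method names are placeholders, not from the original post; your real parse logic goes inside the loop):

```csharp
using System;
using System.IO;

static class LogProcessor
{
    // Processes one line at a time; memory use stays roughly constant
    // regardless of file size, because no line is retained after use.
    public static int CountMatchingLines(string filePath, string needle)
    {
        int matches = 0;
        using (var reader = new StreamReader(filePath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Replace this with your real parse logic.
                if (line.Contains(needle))
                    matches++;
            }
        }
        return matches;
    }
}
```

Because only one line is alive at a time, this works the same for a 7KB file and a 700MB file.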
|
|
|
|
|
Got it
thanks all
Ronen
|
|
|
|
|
700MB is not that large, you shouldn't run out of memory if you process it efficiently, even if you store the whole thing in memory. Calling List.ToArray is going to double the memory usage, though; you should either store it as a List all the time, or read it into an array to begin with (almost certainly the former).
However, I suspect you are doing some streaming task and you don't actually need the whole thing. Parse each line as it comes, and don't store it; instead store whatever information about a line you need to know, if anything.
|
|
|
|
|
An array is a contiguous block of memory. For large files, all processing should be done in your while loop without retaining any of the file in memory; beyond that, your ToArray call could be killing you.
If you need Random Access to the File, look into Memory Mapped Files.
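For illustration only, a minimal sketch of random access through a memory-mapped file (the file path and offset are placeholders):

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

static class MappedReader
{
    // Reads a single byte at an arbitrary offset without loading
    // the whole file into memory; the OS pages data in on demand.
    public static byte ReadByteAt(string filePath, long offset)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(filePath, FileMode.Open))
        using (var accessor = mmf.CreateViewAccessor())
        {
            return accessor.ReadByte(offset);
        }
    }
}
```

In a real application you would keep the mapping open across reads rather than re-creating it per access.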
|
|
|
|
|
I'll repeat that you probably shouldn't be keeping a bunch of rows in memory.
However, if you really need to, I suggest defining a class to hold an entry once it has been parsed. And don't use an array; use a List or a Queue, or if you're passing it to another class, consider using an event.
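As a sketch of that idea, a hypothetical entry class (the field names and the `|`-separated line format are illustrative assumptions, not from the original post):

```csharp
using System;

// Hypothetical shape of a parsed log entry.
class LogEntry
{
    public DateTime Timestamp;
    public string Message;

    // Assumes lines look like "2012-07-31T02:17:00|message text".
    public static LogEntry Parse(string line)
    {
        var parts = line.Split(new[] { '|' }, 2);
        return new LogEntry
        {
            Timestamp = DateTime.Parse(parts[0],
                System.Globalization.CultureInfo.InvariantCulture),
            Message = parts[1]
        };
    }
}
```

Entries like this can then go into a `List<LogEntry>` or `Queue<LogEntry>`, but only if you genuinely need to keep them all.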
|
|
|
|
|
You already know how many lines are in the file from the pageSize argument. Interesting, as it suggests the file has already been read at least once; there may be memory issues there.
I can't tell what object types fileLinesContent or fileLines are, but Arrays, ArrayLists and List<T> let you set the collection's capacity. Setting the capacity when initializing will potentially save memory.
Why not make fileLines the same type as fileLinesContent? Then you can drop the ToArray() call.
As already mentioned, 700MB isn't a big deal; there must be another part to this problem, unless you are working on an old PC.
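A minimal sketch of pre-sizing the collection (the method name is illustrative; the expected line count would come from something like the pageSize argument mentioned above):

```csharp
using System.Collections.Generic;

static class CapacityDemo
{
    // If the line count is known up front, pre-sizing the list avoids
    // the repeated internal re-allocations a growing List<T> performs.
    public static List<string> MakeLineBuffer(int expectedLines)
    {
        return new List<string>(expectedLines);
    }
}
```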
"You get that on the big jobs."
|
|
|
|
|
I am new to programming in general and have been asked to design an app in C# that will distribute names in a database to different tables, but randomly.
I need help seriously because I am on a deadline.
Any help will be duly appreciated.
Thanks
|
|
|
|
|
ayk439 wrote: I need help seriously because I am on a deadline
Then I suggest you seriously start learning. Here are some links that might help:
ADO.Net[^] - to communicate with databases from c#.
Random Class[^] - to generate random numbers, so you can randomly pick the names from a List.
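As a minimal illustration of that second idea (the names are made up; a seeded Random gives repeatable results, which is handy for testing):

```csharp
using System;
using System.Collections.Generic;

static class RandomPicker
{
    // Picks one element at random from the list.
    public static string Pick(List<string> names, Random rng)
    {
        return names[rng.Next(names.Count)];
    }
}
```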
No-one is going to just do your homework for you.
When I was a coder, we worked on algorithms. Today, we memorize APIs for countless libraries — those libraries have the algorithms - Eric Allman
|
|
|
|
|
A few questions that may help people point you in the right direction to get the help you need.
1. What database are you using: SQL Server, MySQL, Access, etc.?
2. Where are the names coming from?
3. Do all the destination tables have the same structure?
4. Are there any restrictions on what you can use or must use? Do you need a UI, or is this a console application?
|
|
|
|
|
Hi,
this code sometimes works, and sometimes doesn't open any page and doesn't even give an error.
I am using this code in the DoubleClick event of a GridView.
void AdvanceAutoGridViewList_DoubleCilickEvent(DataRow e)
{
    try
    {
        Response.Write("<script>");
        Response.Write("window.open('http://localhost:5162/WorkFlows/Forms/PersonalWork/?md=Edit&jb=11&stp=1')");
        Response.Write("</script>");
    }
    catch (Exception ex)
    {
        Master.SetMessage(ex);
    }
}
Thanks in advance!
modified 31-Jul-12 2:17am.
|
|
|
|
|
Check on the browser side that the response was received (read the source of the page received).
Also note that JavaScript code is executed on the browser side, not on the server side. If you test on the computer where the web server is installed, a call to "localhost" works, but from a different computer it will fail.
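As a sketch of that fix, building the script with a relative URL instead of a hard-coded localhost address (the helper name is hypothetical; in ASP.NET WebForms you would normally register the resulting string with Page.ClientScript.RegisterStartupScript rather than emit it via Response.Write, which can land outside the rendered page):

```csharp
static class ScriptHelper
{
    // Builds a window.open script block using a relative URL, so the
    // link resolves against whatever host the client actually used.
    public static string BuildOpenScript(string relativeUrl)
    {
        return "<script>window.open('" + relativeUrl + "');</script>";
    }
}
```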
|
|
|
|
|
Thanks for the answer.
I know that it's not on the web now (it's on localhost).
How do I check that the response was received (read the source of the page received)?
Thanks in advance!
|
|
|
|
|
In the browser, View menu -> Source.
You really need to pick up some beginner books on the stuff you're working with.
|
|
|
|
|
Dear respected sir, I need an ASP.NET project with a database, with the database connection in SQL Server. How do I connect to the database? I need some basic information about this. I hope you will help me.
|
|
|
|
|
For the right connection string, you can always use http://www.connectionstrings.com/[^].
If you are looking for some startup tutorials for ASP.Net / database just search the internet - you will get some good examples.
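As an illustration only, a typical SQL Server connection string has this shape (server, database and credential names are placeholders; the exact form depends on your setup, which is why the site above is useful):

```
Server=myServerName;Database=myDatabase;User Id=myUsername;Password=myPassword;
```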
|
|
|
|
|
Assume a table called tblCompanies and another called tblUsers. Each Company can have one or more Users.
So I have this AddUser method. See the comments in-line
private int AddUser(UserEntity User)
{
    int retVal = 0;
    using (var dc = getDataContext())
    {
        var user = (from u in dc.tblUsers
                    where u.CompanyId == User.CompanyId &&
                          u.UserName.Trim().ToLower() == User.UserName.Trim().ToLower()
                    select u).FirstOrDefault();
        if (user == null)
        {
            tblUser newUser = new tblUser
            {
                CompanyId = User.CompanyId,
                RoleId = User.RoleId,
                FirstName = User.FirstName,
                LastName = User.LastName,
                UserName = User.UserName,
                Password = User.Password,
                CanLogIn = User.CanLogIn
            };
            dc.tblUsers.InsertOnSubmit(newUser);
            dc.SubmitChanges(); // without this the insert never executes and UserId stays 0
            retVal = newUser.UserId;
        }
        else
        {
            // user already exists -- what should happen here?
        }
    }
    return retVal;
}
Aside from try/catch & exceptions, what about the two potential FK violations, on CompanyId and RoleId?
What's the right way to handle all this? I've heard people say "handle these issues in the BLL", while other folks seem to think exceptions related to data should be handled in the DAL.
What are your thoughts?
If it's not broken, fix it until it is
|
|
|
|
|
The general rule is: only catch an exception at the point at which you can do something with it. So if you can deal with it at the DAL, that's perfectly fine. If you need to inform the user and let them make a choice, then you should throw the exception back up the chain. As always, there are exceptions, but this is a good rule to start with. Ultimately, your decision is going to be driven by what satisfies your criteria.
|
|
|
|
|
Truth is, my thinking is this...
FKs should not be a problem, as the UI won't allow the user to send data to the BLL/DAL with an invalid FK selected.
So if I'm not going to handle exceptions in the DAL, which is probably going to be the case, is there any point in putting try/catches in the DAL & BLL at all? If my UI wraps calls to the BLL in try/catches, then I ought to be OK.
If it's not broken, fix it until it is
|
|
|
|
|
Kevin Marois wrote: What's the right way to handle all this.
Log everything you don't expect.
Kevin Marois wrote: I'v heard people say "Handle these issues in the BLL", while other folks seem to think exceptions related to data should be handled in the DAL?
Let's leave the religious argument where it "belongs" to the architecture-astronauts, and stay practical.
The user is usually the one that handles the exception, if it's not something that can be ignored or retried automatically. In the case of an FK violation, the user could be informed and asked to select some other value. In the case of a connection problem, timeouts, whatever, there are* three retries, and if those fail, there's a dialog with common causes for that particular exception and the option to retry (Y/N/C). Yes, that's a lot of bubbling-up that an exception has to do from a DAL.
Anything the user can't handle will hopefully be handled by a helpdesk, or the author of the software. That's why I included the superfluous advice to log everything you don't expect.
--
* (in the ideal situation)
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Exceptions should be handled where they make sense. That's generally in the business logic, because you don't know enough in a data access method to know what a proper resolution is. However, it can make sense to catch and rethrow a new exception, if the exceptions you receive in data access are specific to the data source you're connecting to, because the whole point of layered design is that the logic layer is insulated from the actual data source, so you don't want to be leaking SqlExceptions up to it.
The logic should be designed so that 'obvious' failure cases never get to the DAL, though. In this case I'd expect a user name to be checked for existence before trying to add it, because that's a nicer time to get a message so the user can fix it. And I'd expect the company and role to be picked in a constrained way so the FK can't fail in normal circumstances. DAL methods should only fail if something unexpected happens (e.g. database goes down, race condition between clients causes the data to no longer be valid, etc). The DAL should still handle failure gracefully but it shouldn't be common.
And yes, I'd throw an exception if the user existed, though I think I'd do that just by assuming it doesn't and letting the INSERT fail in the case that it does. Note that your current code can still fail if someone else INSERTs a user with that name between your check and the addition! You might as well just try the addition and let it fail, since you have to protect it anyway.
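As a sketch of the catch-and-rethrow idea above (the exception type, method shape and the use of a generic delegate in place of real LINQ-to-SQL calls are all illustrative assumptions):

```csharp
using System;

// Hypothetical exception for the DAL boundary, so the logic layer
// never has to know about SqlException or any other provider type.
class DataAccessException : Exception
{
    public DataAccessException(string message, Exception inner)
        : base(message, inner) { }
}

static class UserDal
{
    // Wraps a data-source-specific failure in a layer-neutral one.
    // The inner exception is preserved for logging at the top level.
    public static int AddUser(Func<int> insert)
    {
        try
        {
            return insert();
        }
        catch (Exception ex) // in real code, catch SqlException here
        {
            throw new DataAccessException("Could not add user.", ex);
        }
    }
}
```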
|
|
|
|
|
0) Yes, you probably shouldn't need to be concerned about referential integrity if the user is selecting items from lists of existing items.
1) If there is a unique index, then I wouldn't check to see if the user exists, I find it wasteful because the INSERT will do this anyway.
2) I prefer to catch Exceptions and wrap them in more-meaningful Exceptions (ReferentialIntegrityException, TimeoutException, etc.) so the next higher level has more information.
3) If there is a user, you should alert him about the issue. If not, then log it. This is handled by the highest level, not the lowest or anywhere in between. The DAL and BLL shouldn't need to know whether or not there is a log or a user. Consider that there could be multiple client applications using the same DAL and BLL -- some interactive, some batched, maybe a Web Service.
4) I also question the choice of having a Company field in the User -- I'd prefer to have a relationship table to associate them, but that's just me.
|
|
|
|