|
Colin Angus Mackay wrote: I'm going to defer to one of the industry experts on interoperability here (Christian Weyer) who advises against DataSets when dealing with non-.NET systems.
Unless your only aim in this discussion is to create the impression that you are right and everybody else is wrong, why would you quote someone saying the same thing I just said, pretending we disagree on something we do not?
I already said I agree it's not ideal for non-.NET clients, although I maintain that it would not in fact be much of a challenge for a moderately skilled developer to read this XML and turn it into objects native to the client. But please don't quote just that bit and pretend I don't agree that the WSDL should let his tools do that for him and that he shouldn't have to concern himself with XML, because I do agree. It's still valid to point out that it wouldn't be "hellish".
|
|
|
|
|
dojohansen wrote: They're just XML and if using Java means "a lot of code" is "going to have to be written at the Java end" to pick out nodes from an XML document then Java is pretty weak.
It's the fact that you HAVE to write the code to extract the data that's the problem. A DataSet, for example, is a Microsoft-only construct. This means that you have to write code at the other end to navigate the structure and extract the data - yes, you get this for free in .NET because the framework takes care of it for you, but having dealt with the Java world enough, I know that you'll just end up with people using other resources.
The larger amount of data refers to the fact that a DataSet isn't just data - it's the whole metamodel for a non-specific data structure. We use well defined business objects that just contain the information that we're interested in, and use WCF to make this available.
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
|
|
|
|
dojohansen wrote: data sets have some very nice features. They lend themselves to AJAX and web services rather well since their internal representation is XML
The internal representation of a DataSet is not XML. That would be incredibly inefficient.
dojohansen wrote: To implement things like the DataTable.Select() method or DataViews and filters, or relations, you need to start including metadata in the classes or use reflection to discover it, and you then lose the raw speed of primitive data objects with hardcoded relations
Using LINQ to Objects you don't have these problems; it is built into the language, and the type discovery is done at compile time, so it won't slow your application at run time.
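To illustrate the point, here is a minimal sketch of a DataTable.Select()-style filter done with LINQ to Objects over plain typed objects (the Order type and its properties are invented for the example):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity type, just for illustration.
record Order(int Id, string Customer, decimal Total);

class Demo
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order(1, "Acme", 120m),
            new Order(2, "Acme", 80m),
            new Order(3, "Initech", 200m),
        };

        // Roughly the equivalent of DataTable.Select("Customer = 'Acme' AND Total > 100"),
        // but statically typed: a typo in a property name fails at compile time,
        // not at run time.
        var bigAcmeOrders = orders
            .Where(o => o.Customer == "Acme" && o.Total > 100m)
            .OrderByDescending(o => o.Total)
            .ToList();

        Console.WriteLine(bigAcmeOrders.Count);  // 1
        Console.WriteLine(bigAcmeOrders[0].Id);  // 1
    }
}
```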
dojohansen wrote: How much of the functionality it offers is useful to me? How much might become useful down the road?
I've already discussed the use cases where I think DataAdapters/DataSets/DataTables work. They are few, because the business model is more useful to me. I don't like data artifacts to exist outside the data layer, because it means I can't swap a database for a flat file, XML file, web service or any number of other things that don't natively talk in DataSets.
I like my applications to be clean and easily understood. DataSets have no business functionality. They cannot tell me the business rules regarding refunds, or how certain discounts can be applied. Sure, I can see that there is data to support refunds or discounts, but without the business rules it isn't nearly rich enough.
dojohansen wrote: the client makes an AJAX request the body of which is the edited XML document, and the server simply creates a dataset from the XML and uses DataAdapter.Update() to delete, insert, and update as required, detecting concurrency violations if any.
And where do the business rules fit in to all this? Maybe we are creating different types of application and you are creating an application that is an interface onto a database but in every application I've ever worked on there have been rules, validation, conditions that have to be met before the data is allowed anywhere near a database. There is more to persisting data than concurrency violations.
dojohansen wrote: Personally, I quite like the datasets.
I can tell. I'm beginning to wonder if perhaps you are the dogmatic one.
|
|
|
|
|
Colin Angus Mackay wrote: The internal representation of a DataSet is not XML. That would be incredibly inefficient.
I'll take your word for it. I've read in several places that it is, for example here[^], which is supposed to be about interview questions for programmers. The relevant bit is point 3, which reads
3. For a DataSet, XML is the internal representation as well as the medium used for output, but for a Recordset XML is merely an output format.
I am not really sure why you say it would be "incredibly inefficient", as trees aren't generally inefficient structures, at least not in terms of time. Space-wise they might be, although that really depends on whether there are a lot of nodes with little data or a few nodes with plenty - I guess most database data would lend itself towards the first case though.
Colin Angus Mackay wrote: Using LINQ to Objects you don't have these problems, it is built in to the language and the type discovery is done at compile time so it won't slow your application during run time.
LINQ may be better, but once again you're only proposing alternatives that work if you already have an entity layer. I think you are wrong to assume every application has a lot of business logic. Some servers, for example, do little more than expose read-only data to the world.
Colin Angus Mackay wrote: And where do the business rules fit in to all this?
Oh come on! It's not like basing the business code on DataSets is much harder than building it into your entity types - especially if they are generated and the darn tool can't make partial classes. Sure, the typed DataSets have awful names, but if I include a few aliases in my code file I can write code that is virtually identical to your beloved entity code to do validation.
You don't seem to acknowledge that if you don't use DataSets for CRUD and serialization/deserialization, you have to use something else. LINQ may be a good alternative for CRUD; I must admit I don't like it though - it's totally opaque, and one never quite knows what's going on under the hood, even if it does seem to do a pretty good job, at least with LINQ to SQL. I haven't got the experience with it to really judge. But for serialization I really don't think the Serializable attribute is of much use for properly designed entity objects, since it only works with public read-write members.
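That limitation is easy to demonstrate with XmlSerializer, which is what .NET web services use under the hood (the Entity class and its members are invented for the example):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical entity: one public read-write property, one read-only property.
public class Entity
{
    public int Id { get; set; }        // serialized: public getter AND setter
    public string Secret { get; }      // silently skipped: no public setter
    public Entity() { Secret = "hidden"; }
}

class Demo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Entity));
        var writer = new StringWriter();
        serializer.Serialize(writer, new Entity { Id = 7 });

        string xml = writer.ToString();
        Console.WriteLine(xml.Contains("<Id>7</Id>")); // True
        Console.WriteLine(xml.Contains("Secret"));     // False
    }
}
```

So a well-encapsulated entity with read-only state simply loses that state on the wire, which is the complaint above.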
I'll cave in here and say that maybe datasets are the root of all evil, but I refuse to believe that I could do all the things I can do with them using entity objects and linq without a lot more code and a much higher degree of complexity.
|
|
|
|
|
By the way, I forgot to say that I tried to look at DataSet in Reflector, and though I didn't dig very deep it seems pretty clear that you're correct, and that site is wrong: XML, it seems, is indeed not the internal representation. I just wanted to excuse myself by showing this misconception exists elsewhere. I do not however wish to continue spreading it!
|
|
|
|
|
And a final (I hope, this is fun but getting a bit long and I fear I might start annoying people by now, even if they don't have to read my scribblings) note.
Wikipedia, that sometimes authoritative source of information about anything (it always makes me think of THHGTTG), has an article on ADO.NET. It says this about DataSets:
A DataSet is populated from a database by a DataAdapter whose Connection and Command properties have been set. However, a DataSet can save its contents to XML (optionally with an XSD schema), or populate itself from XML, making it exceptionally useful for web services, distributed computing, and occasionally-connected applications.
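That save/populate round trip is easy to demonstrate without any database at all; a minimal sketch (table and column names invented):

```csharp
using System;
using System.Data;
using System.IO;

class Demo
{
    static void Main()
    {
        // Build a small DataSet in memory.
        var ds = new DataSet("Shop");
        var table = ds.Tables.Add("Product");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Hobnobs");

        // Serialize the contents to XML...
        string xml = ds.GetXml();

        // ...and populate a fresh DataSet from that XML again.
        var copy = new DataSet();
        copy.ReadXml(new StringReader(xml));

        Console.WriteLine(copy.Tables["Product"].Rows[0]["Name"]); // Hobnobs
    }
}
```

WriteXml/WriteXmlSchema can be used instead of GetXml when you want the XSD schema included, so the receiving side gets column types as well as data.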
|
|
|
|
|
dojohansen wrote: Personally I think data adapters are useful and the disconnected data model can be enough for many things.
Don't forget memory intensive, and a real no-no when it comes to interoperability. Speaking as somebody who has spent a lot of time writing code that communicates with Java-based systems, I can tell you that DataSets etc. are just plain evil.
|
|
|
|
|
I think you mean "speaking as someone who considers himself quite the expert".
Pete O'Hanlon wrote: Don't forget memory intensive, and a real no-no when it comes to interoperability.
I believe you mean "scalability". If I am mistaken, I invite you to explain to us all how whether or not something is "memory intensive" (again, it's not very clear what you mean by this) affects whether or not it is interoperable. I for one have no idea. Exposed in a web service the dataset becomes an xml stream that the client may process as it comes in over the wire, save to a file and then process as a stream, or load into memory in full and then process in a random-access manner. I cannot imagine how the "memory intensity" of datasets can possibly affect their interoperability.
If you mean "takes up a large amount of memory" then datasets are "intensive" in the sense that you can't easily work with them as streams. But again, a drawback that represents a *potential* issue in *some* usage scenarios is presented in a totally dogmatic fashion as if it was a disqualifying feature of the technology. And again, it's not like entity objects solve this problem either. If you need to keep a dataset in memory there's some reason why that does not magically disappear just because you choose to represent the data in a different way. It may *help* with a more compact representation, but only a totally different approach like streaming would really *solve* such an issue.
So again, my challenge to you: if DataSets are so universally bad, describe a solution that is universally better. I assure you it will not be difficult to do as you did and just point out some potential problems that may exist in your solution and pretend that these automatically make it useless, even though the simple truth is that *ANY* solution has drawbacks and advantages compared to any other.
|
|
|
|
|
dojohansen wrote: I think you mean "speaking as someone who considers himself quite the expert".
Nope. And don't try to put words into my mouth. I'm speaking as somebody who writes software that has to interoperate with systems running on Java, or other non-.NET systems, where DataSets just add complications and overheads.
dojohansen wrote: I believe you mean "scalability".
You're right. That's what comes of thinking three sentences ahead of what I'm typing.
Also - where did I say that DataSets are universally bad? At no stage did I state that - I did state that they were a no-no when it came to interoperable systems, and this is based on hard learned lessons, with paying clients. Please stop reading more into posts than were ever intended.
If I needed to use a DataSet internally, I would. I can't think of any instance where I've needed to off the top of my head, but that wouldn't stop me. In most cases, I prefer the flexibility of plain old business entities, especially as we use them in conjunction with change notification and validation.
|
|
|
|
|
Option 2 always. I can't think of a good enough reason for option 1.
|
|
|
|
|
Option 2 is the way to go. No need to tie up system resources when it is not being used.
|
|
|
|
|
Yep, number two. But what that means depends on what the application does.
|
|
|
|
|
Hi,
the answer is that in ADO.NET you should always close the connection when it isn't in use. The connection classes manage the underlying TCP connection for you, so although you are "logically" closing the connection you are not in fact incurring the cost of tearing down and reestablishing the database connection each time.
In practice this makes the most difference in server apps, where connection pooling is of great use. Whenever your code closes a connection, it in fact simply releases that connection to the pool, and the next time a connection instance is constructed with the exact same connection string, the pre-existing connection is returned. (Some work is done to reset the connection state so it appears freshly established - see sp_reset_connection in SQL Server; I'm not sure exactly how this is implemented by the other providers.)
But even in a desktop app where each client has a dedicated connection to the database and no pooling takes place it's considered good practice to open and close the connection. I think it is; it makes error handling a little easier. You still have to catch exceptions and perhaps log and present errors, but if the user wishes to retry an operation there's no additional logic to check the state of the connection or find out if it's necessary to open it first, because you simply *always* open it where you need it and close it when you're done with it.
I personally use a simple connection wrapper class to centralize the code implementing the patterns I wish to use. You may not bother doing this if you use code generation for most of your data access code, but if you hand-code this stuff it makes a huge difference - much less code, far fewer errors, and much easier debugging. And if you just put this class in a separate library and never put anything app-specific in it you'll start building reusable code that has applications everywhere.
For example, with my Connection class you can do very common tasks like these very easily:
Connection c = Connection.FromConfig("mainDB");
int count = c.ExecuteScalar<int>("select count(*) from [table] where [col] < @p0", value);

using (c.BeginTransaction())
{
    c.ExecuteProc("update_stock");
    c.ExecuteNonQuery("truncate table [pending_orders]");
    someOtherObject.DoSomething(c);
    c.CommitTransaction();
}

string[] popular = c.GetColumn<string>("select top 10 p.name from product p join sales s on s.productID = p.id order by s.itemsSold desc");
These are just a few examples. The feature shown with the generic ExecuteScalar<T> method is used basically everywhere strings are used to specify command texts. It's inspired by the string formatting feature, as in string.Format("{0} loves {1}", "Dag", "Jennifer"). Just as "{0}" refers to the first argument in the args array, we adopted the naming convention "@p<index>", so that the first argument is the value for a parameter named @p0, the second for @p1, and so on. It's simple, efficient, and encourages use of parameterized queries where lazy programmers might otherwise simply concatenate strings (which of course is very dangerous, since it opens the door to SQL injection attacks).
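A hedged sketch of how that naming convention might be implemented - this is my guess at the shape of such a helper, not the actual Connection class described above:

```csharp
using System;
using System.Data.SqlClient;

static class CommandBuilderSketch
{
    // Builds a parameterized SqlCommand, binding args[0] to @p0,
    // args[1] to @p1, and so on, mirroring the string.Format convention.
    public static SqlCommand Build(SqlConnection conn, string sql, params object[] args)
    {
        var cmd = new SqlCommand(sql, conn);
        for (int i = 0; i < args.Length; i++)
            cmd.Parameters.AddWithValue("@p" + i, args[i] ?? DBNull.Value);
        return cmd;
    }
}
```

Usage would then look like `CommandBuilderSketch.Build(conn, "select count(*) from [table] where [col] < @p0", value)`; the caller never concatenates values into the SQL text.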
I really think I should prepare a CodeProject article to present this logic. It's quite simple but I feel a lot of people might find it useful.
|
|
|
|
|
dojohansen wrote: The connection classes manage the underlying TCP connection for you
Assuming you are connecting to the SQL Server with a TCP/IP connection.
|
|
|
|
|
True. It's hard to comment on anything here with all the gotchas hiding in the bushes.
|
|
|
|
|
Always opening and closing the connection, without a helper object, can become really problematic.
I started work on a project that relied on this technique, and here is the problem:
Method A - Opens a connection to do its queries and then calls method B.
Method B - Opens a connection to do its queries and then calls method C.
Method C - Can you see the pattern?
I created some "helper" objects for this. For example, the application considers one database to be the Default database. So, I use:
using (var connection = new ThreadConnection())
{
}
This ThreadConnection is a wrapper class that:
1 - Checks whether a connection already exists for the current thread and uses it, or opens a new connection if none is open.
2 - At Dispose(), closes the connection if it created it; if it didn't create the connection, it does nothing.
This solution looks very similar, as methods A, B and C will all create a ThreadConnection object, but only the outermost method (A in this example) will actually create and dispose the connection.
But if B is called from another method that has not already created a connection, then B will create and close it. Much better than having methods with overloads so they either create the connection or use the existing one (that was the "original solution" in the project I work on now), and absolutely better than opening hundreds of connections as each method opens its own.
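A minimal sketch of that ownership pattern (the ISimpleConnection interface is invented here so the idea can be shown without a real database; in the real class it would wrap SqlConnection or IDbConnection):

```csharp
using System;

// Invented minimal connection abstraction, for illustration only.
public interface ISimpleConnection : IDisposable
{
    void Open();
    void Close();
}

public sealed class ThreadConnection : IDisposable
{
    // At most one current connection per thread.
    [ThreadStatic] private static ISimpleConnection current;

    // Assumed to be configured once at startup, e.g. to open the default database.
    public static Func<ISimpleConnection> Factory;

    private readonly bool ownsConnection;

    public ISimpleConnection Connection { get; }

    public ThreadConnection()
    {
        if (current == null)
        {
            current = Factory();
            current.Open();
            ownsConnection = true; // outermost scope created it, so it closes it
        }
        Connection = current;
    }

    public void Dispose()
    {
        if (ownsConnection) // inner scopes (methods B and C) do nothing here
        {
            current.Close();
            current.Dispose();
            current = null;
        }
    }
}
```

Methods A, B and C each write `using (var tc = new ThreadConnection()) { ... }`; only the outermost using statement actually opens and closes the underlying connection.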
|
|
|
|
|
It's a very good and important question for beginners. It is not wise to keep the connection open for the whole application's lifetime: it ties up resources and costs performance. Opening and closing the connection as needed may be a bit more work, but it dramatically improves scalability, and it is the best practice as well.
Either you love IT or leave IT...
|
|
|
|
|
|
When we log in on a site, after login it shows a welcome page. If I copy and paste the welcome page URL into another browser, the page is redirected to the login page. How does this happen, and what coding is done in the web.config file? Please help me out...
|
|
|
|
|
deepak baldia wrote: When we log in on a site, after login it shows a welcome page. If I copy and paste the welcome page URL into another browser, the page is redirected to the login page. How does this happen
Your authentication ticket is stored in a cookie. If you open up another browser it doesn't have the cookie and it redirects you to the login page so you can generate an authentication ticket.
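For the web.config part of the question: with ASP.NET Forms authentication, the setup that produces exactly this redirect behaviour looks roughly like this (the login URL and timeout are illustrative):

```xml
<configuration>
  <system.web>
    <!-- Requests without a valid authentication cookie are redirected to loginUrl -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" timeout="30" />
    </authentication>
    <authorization>
      <!-- "?" means anonymous users: deny them everywhere -->
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```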
|
|
|
|
|
i have a question ;
how i can insert an image to a sql server data base?and how i canconfigure it to show information in datagrid??thinks
..Med
|
|
|
|
|
mohamedmrc wrote: i have a question ;
how i can insert an image to a sql server data base?
The butler is out, but you could try Google[^] or the examples[^] on this site in the meantime
I are troll
|
|
|
|
|
mohamedmrc wrote: thinks
Indeed, give that a try.
|
|
|
|
|
led mike wrote: mohamedmrc wrote:
thinks
Indeed, give that a try.
It'll never catch on.
|
|
|
|
|
If I have a quantity of unmanaged code (in C, designed for another system), and most of it could be safely terminated at any time but it makes callbacks into managed code which could not be so terminated, would it be safe to do something like:
volatile int statflag;
if (Exchange(&statflag, 1) == 0)
{
call_managed_code();
if (Exchange(&statflag, 0) != 1)
terminate();
}
else
terminate();
' In the other thread...
If Threading.Interlocked.Exchange(statflag, 2) = 0 Then
    TheThread.Abort()
If the portion of the code which is in C never does anything that could not be safely stopped asynchronously while statflag is 1, would there be any danger in calling Abort() on such a thread? Obviously any resources that had been allocated by that thread would have to be freed elsewhere, but in this particular application I'm expecting the unmanaged code to use only resources that are given to it by other code (which could then take care of any necessary de-allocation).
In the particular application, there is a risk that the C portion of the code could hang without any callbacks to the managed portion, so I'd like to be able to use something more "potent" than Thread.Interrupt; is Thread.Abort safe if used only on a thread which is known not to hold any locks or be manipulating any other dangerous constructs?
|
|
|
|
|