|
Indeed, you will have a hell of a job coding a data-layer for 500 tables. And 500 tables isn't very rare these days.
WM.
What about weapons of mass-construction?
|
|
|
|
|
We use Informix in our shop, so finding an ORM tool has been a challenge. I tried out a product by IdeaBlade and it has done a great job. It is a little tough, though, going from writing your own SQL statements to using reverse Polish notation to generate a query. Does anyone know of another product that will work with Informix? I keep begging the MyGeneration guys to add Informix, but I think they are too busy with other DBs. One can always hope.
www.ideablade.com
The most important thing in communication is to hear what isn't being said. - Peter Drucker
|
|
|
|
|
I expect MyGeneration 2 will handle many more databases, especially since users will be able to add support for them themselves. It will also be free.
|
|
|
|
|
Yes, the MyMeta II provider model is already worked out. Finally, users will be able to easily create providers to add any number of new databases to the MyGeneration arsenal.
|
|
|
|
|
As I don't code for .NET, I can't say which .NET data-tier generator I use.
However, at work we code in Java, and our generator is Hibernate too...
|
|
|
|
|
So I voted "none", because I don't believe in ORM and in closely coupling the data objects to the persistence layer. I actually PREFER a non-strongly-typed abstraction layer between the UI and the persistence layer. I prefer a more complex but more generic, reusable persistence layer that decouples how I'm persisting the data from what I'm persisting, so that I can save to XML, a comma-delimited file, or a relational DB. I prefer, again, a more complex but more reusable persistence service that auto-generates the SQL for me (yeah, with a backdoor for when I truly need to tell it "here's the SQL").
Yes, there's a lot of pain up front in writing, testing, and documenting the engine, but I have to tell you, once you get it working, it works so smooooth for many different kinds of applications and scenarios. No hard-coded objects, no recompiling the app when the schema changes. Heck, I can make changes to the schema on the server side while it's running, wire up the UI element-property, tell the client-side UI to refresh, and voila, data is loaded, persisted, etc.
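Marc's engine itself is never shown in this thread, so the following is only a guess at the shape of the idea — a minimal sketch of a loosely typed persister that builds SQL from schema metadata loaded at runtime. All names here (`LoosePersister`, `BuildInsert`) are invented for illustration; they are not from any real product.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: the caller never touches generated entity classes.
// It hands the engine a table name and a loose bag of field values, and the
// engine builds the SQL from schema metadata loaded at runtime (e.g. from an
// XML file) -- so a schema change needs no recompile of the client.
public class LoosePersister
{
    private readonly Dictionary<string, List<string>> schema;

    public LoosePersister(Dictionary<string, List<string>> schema)
    {
        this.schema = schema;
    }

    // Generates a parameterized INSERT for whichever fields the caller supplied.
    public string BuildInsert(string table, IDictionary<string, object> values)
    {
        if (!schema.ContainsKey(table))
            throw new ArgumentException("Unknown table: " + table);

        // Emit columns in schema order so the generated SQL is deterministic.
        var cols = schema[table].Where(c => values.ContainsKey(c)).ToList();
        return "INSERT INTO " + table +
               " (" + string.Join(", ", cols) + ")" +
               " VALUES (" + string.Join(", ", cols.Select(c => "@" + c)) + ")";
    }
}
```

Because the schema dictionary is data rather than compiled types, adding a column is a change to the schema file, not to the application binary — which is the trade Marc is describing.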
Marc
Pensieve
Some people believe what the bible says. Literally. At least [with Wikipedia] you have the chance to correct the wiki -- Jörgen Sigvardsson
|
|
|
|
|
In the world of enterprise applications, the advantages of a strongly typed data layer blow away the advantages of a runtime system like the one you described. When a database schema changes, thousands of lines of code can become invalid without you ever knowing it, because the compiler doesn't catch runtime errors. Once you've used a strongly typed framework along with a code generator in a large application, you will never go back. Most of the ORM systems I've used provide backdoor SQL queries when needed, serializing and deserializing data to XML, etc. Runtime magic and reflection are really nifty tools, but they end up causing headaches with debugging. In the scripting world things are a lot different, because all errors are runtime errors. Maybe you come from a scripting background or something. I say, thank God for compilers, or else we'd all be scanning logfiles for hours and hours to debug our applications; and thank God for ORM and code generation, because it keeps us focused on the code that matters instead of the boring busy work of coding a custom DAL for every project.
Justin Greenwood
MyGeneration Software
-- modified at 0:00 Wednesday 26th April, 2006
|
|
|
|
|
JustinGreenwood wrote: When a database schema changes, thousands of lines of code can become invalid without you ever knowing it because the compiler doesn't catch runtime errors.
Um, no, that's not what happens. Rather, that's what happens with a strongly typed system, because now the strong types no longer match the schema types, and you have to rebuild the model. And heaven help you if the code that interfaces to the ORM layer is expecting a particular type. There's your thousands of lines of code that you have to change. In a "loose-type" system, neither the server nor the client particularly cares what the schema data type is, except in specialized cases. It's all handled by data binding, type converters, and formatters.
JustinGreenwood wrote: you will never go back.
Tried it, and did go back.
JustinGreenwood wrote: Runtime magic and reflection are really nifty tools
No need for any magic. Well, granted, data binding uses reflection (although, in .NET 2.0, they've changed the architecture so reflection isn't necessary).
JustinGreenwood wrote: In the scripting world things are a lot different because all errors are runtime.
As an aside, you can detect those errors through unit tests and parser tests, at compile time. But you can totally avoid scripting by reading the database schema directly. I personally prefer to define the DB schema in XML and then generate the SQL to create the DB, since I can deal with any quirks that different DB's might have. But you don't have to do it that way. Regardless, nothing else needs to be scripted.
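Marc doesn't show his XML schema format or generator, so here is a purely hypothetical sketch of the idea he describes — defining the schema in XML and generating vendor-specific DDL from it. The XML attribute names (`name`, `type`, `required`) are invented for illustration.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Hypothetical illustration: the schema lives in XML, and any per-vendor DDL
// quirks would be isolated in the generator rather than in hand-written SQL.
public static class SchemaToSql
{
    public static string CreateTable(XElement table)
    {
        // One line per <column> element: "name type [NOT NULL]".
        var cols = table.Elements("column").Select(c =>
            "  " + (string)c.Attribute("name") + " " + (string)c.Attribute("type") +
            ((string)c.Attribute("required") == "true" ? " NOT NULL" : ""));

        return "CREATE TABLE " + (string)table.Attribute("name") +
               " (\n" + string.Join(",\n", cols) + "\n)";
    }
}
```

Feeding it `<table name="Employees"><column name="Id" type="INT" required="true"/><column name="Name" type="VARCHAR(50)"/></table>` yields an ordinary `CREATE TABLE Employees (...)` statement; a second generator class could target another vendor's dialect from the same XML.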
JustinGreenwood wrote: Maybe you come from a scripting background or something.
True enough. But not necessarily relevant in this case. Whether you use scripting or not is more an implementation detail, not an architectural requirement.
JustinGreenwood wrote: instead of the boring busy work of coding a custom DAL for every project.
But that's the whole point. Once coded, I've never had to go back and code it again for another project. There is no custom DAL for every project. And unlike ORM, there is no custom auto-generated DAL either.
Well, if you ever want a demo, let me know, and we can discuss the pros and cons (which of course, there are, just like anything else) to the tools I'm using.
Marc
Pensieve
Some people believe what the bible says. Literally. At least [with Wikipedia] you have the chance to correct the wiki -- Jörgen Sigvardsson
|
|
|
|
|
Marc Clifton wrote: Um, no, that's not what happens. Rather, that's what happens with a strongly typed system, because now the strong types no longer match the schema type, and you have rebuild the model.
You make it sound like rebuilding the model is a painful thing. It's a matter of a single click, or for many like myself, it's integrated into the build process.
It's extremely reliable and protects you with Compile-time errors. Sure, there are many lines of code to change, but you know where they are right away after you regenerate your code. There is no way a runtime system can do that. I think you underestimate the value of compile time errors. They are usually very clear and easy to fix. Runtime errors are typically elusive, especially in a non-debug build. Another major advantage is intellisense. Ever heard of it? I love it when I see:
Table["Employexe"].Column["N1ame"] = "smith";
Intellisense protects you from stupid errors like that. It also speeds up the development process because you don't have to memorize the model. If you don't see the advantage of that, I give up on you right now.
Here's an example: let's say you go in and remove a foreign key field and add a required field. Now run your application. What happens? It still runs! YAY... Wait a minute, now I'm getting runtime errors all over the place. When I try to save my form, it crashes saying it's missing required fields. Crap, and when I try to view my child record form, it can't find the related record.
The thing is, you have no idea where all your errors are when you don't have a strongly typed model. All the type converters and data binding in the world can't fix that; they just make you feel good when you see 0 compile errors. Sure, you could build a huge test suite alongside your application to test every little thing, but then you have even more code to maintain and another step to go through every time you build (a time killer). In the real world, building a huge test suite is completely unrealistic.
As far as tying your hands behind your back, that's ridiculous. There is always the option of doing things outside the model for improved performance. There's no reason you can't open up a reader and loop through it, or call a stored proc directly when needed.
-- modified at 9:48 Wednesday 26th April, 2006
|
|
|
|
|
JustinGreenwood wrote: You make it sound like rebuilding the model is a painful thing.
Certainly rebuilding isn't painful, but redistribution of the client and server apps can be. Obviously, in both approaches, something has to be redistributed. I would much rather distribute an XML file of the schema than a lot of DLLs and EXEs.
JustinGreenwood wrote: Another major advantage is intellisense.
That's a strong argument, actually, but it only applies to when you need to write business rules, at least in my architecture. And I agree, I cringe when I see that. So actually, there is a place for some ORM. But a heck of a lot of transactions can be accomplished without any business rules, where you don't need the ORM generated classes.
JustinGreenwood wrote: If you don't see the advantage of that, I give up on you right now.
LOL! Yeah, I see the advantage of that.
JustinGreenwood wrote: Wait a minute, now I'm getting runtime errors all over the place.
You're making an interesting assumption. Why would I get runtime errors? Why wouldn't the server generate the correct SQL, and why wouldn't the client-side automatically wire up an "entry required" validation routine to the required field? (Or, if you prefer, the server sends back to the client a "required field" error.) It's called automation.
JustinGreenwood wrote: When I try to save my form it crashes saying it's missing required fields.
Nope. Automatic validation takes care of that. It's a smart system, after all; you don't have to tell it every stupid little thing.
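The "automatic validation" Marc alludes to isn't shown, but the mechanism is easy to imagine: the client reads "required" flags from runtime schema metadata and validates before submitting. The sketch below is hypothetical (`SchemaValidator` and its shape are invented), just to make the idea concrete.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: required-field rules come from schema metadata loaded
// at runtime, so adding a required column on the server needs no client
// recompile -- the next schema refresh picks up the new rule.
public class SchemaValidator
{
    private readonly Dictionary<string, bool> requiredByField; // field -> required?

    public SchemaValidator(Dictionary<string, bool> requiredByField)
    {
        this.requiredByField = requiredByField;
    }

    // Returns the names of required fields that are missing or empty.
    public List<string> MissingRequired(IDictionary<string, object> record)
    {
        return requiredByField
            .Where(kv => kv.Value)
            .Where(kv => !record.ContainsKey(kv.Key) ||
                         record[kv.Key] == null ||
                         record[kv.Key] as string == "")
            .Select(kv => kv.Key)
            .OrderBy(name => name)
            .ToList();
    }
}
```

A form would call `MissingRequired` before saving and flag the listed fields, which is the behavior Marc claims replaces the hard crash in Justin's scenario.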
JustinGreenwood wrote: Crap, and when I try to view my child record form it can't find the related record.
Why should it? You removed the FK. How would ORM handle that? With a compiler error, because the dependent child class is no longer a member of the primary class and some other code outside of the ORM-generated code is expecting that relationship? OK, that's one way to look at it. Another way is: if I change my schema and I have a tightly coupled ORM-application code architecture, then I have to go back through and fix all my broken code. If instead the child just references a virtualized view of the tables and fields it depends on, then guess what: the UI still works, the server generates the right SQL, the business rules still work (because they're coupled to the virtualized views, not directly to tables and fields), and life goes on, without changing code.
But that's a really good example. I'm going to have to use it as an example of the advantages of my architecture.
JustinGreenwood wrote: In the read world, building a huge test suite is completely unrealistic.
I tend to disagree. It's a requirement, IMHO, regardless of what architecture you use. The more (meaningful) tests, the better.
Marc
Pensieve
Some people believe what the bible says. Literally. At least [with Wikipedia] you have the chance to correct the wiki -- Jörgen Sigvardsson
|
|
|
|
|
Marc,
I tend to get a little fired up talking about this stuff. I believe strongly in code generation and a strongly typed model, and I think I have some good arguments to back up my opinions. But at the end of the day, if you can get the job done quickly and effectively in your own way, then more power to you. I always start a project trying to make it a work of art, but in the end I just want to get the thing done and working on time. I enjoy these debates immensely because they keep my mind open to other ways of doing things. There are definitely some advantages to a runtime system, especially for smaller projects. If I were doing a small app and just wanted to get the damned thing working as fast as possible, there are times when I would prefer to use such a system. However, in the kind of application I've been doing at work, I'm dealing with hundreds to thousands of tables, and I couldn't survive with a system like the one you're using. I guess there's never one perfect solution for every project that uses a database. There are many options, and you have to use the one that makes sense.
|
|
|
|
|
JustinGreenwood wrote: I tend to get a little fired up talking about this stuff.
Me too!
JustinGreenwood wrote: I have some good arguments to back up my opinions.
Yes you do.
JustinGreenwood wrote: I enjoy these debates immensely because they keep my mind open to other ways of doing things.
As do I, but I keep having problems wrapping my head around the advantages of ORM--barring the intellisense (a definite advantage) and possibly the strong typing in certain situations that you pointed out.
JustinGreenwood wrote: I couldn't survive with a system like the one you're using.
Hehe, interesting. Well, like I said, if you ever want a demo, I've got a GoToMyPC account (if I can remember where I wrote down the passwords!) and I'd be happy to give you one (probably overly simplistic, but what the heck).
Marc
Pensieve
Some people believe what the bible says. Literally. At least [with Wikipedia] you have the chance to correct the wiki -- Jörgen Sigvardsson
|
|
|
|
|
Marc,
I would be interested in a demo. I wrote the article http://www.codeproject.com/aspnet/NHibernateBestPractices.asp[^], and am a bit biased towards NHibernate, but I would love to see a different approach. Correct me if I'm wrong, but it sounds like your approach is more in line with the Ruby on Rails philosophy of "convention over configuration."
IMO, I think we* have a lot to learn from the Rails approach and would like to see a port of the Rails approach to .NET. (I know Castle MonoRail exists, but it's still not nearly as automated as Ruby on Rails.)
Billy @ http://www.geekswithblogs.com/billy
* "We" includes all developers who write SQL code, stored procs, ADO.NET, any kind of thick data-tier, and even NHibernate mapper writers!
|
|
|
|
|
Marc Clifton wrote: Well, if you ever want a demo, let me know, and we can discuss the pros and cons (which of course, there are, just like anything else) to the tools I'm using.
Can you post a demo somewhere? Or, if you don't mind, send me a copy of it: joao@araujofamily.org.
I am really interested in the way you do it. Though I have to agree with Justin: I work with 100-table databases, which would be complicated to work with the way you do it, and I also feel comfortable with ORM (Hibernate, NHibernate).
Anyway, I think it would be of some help to look into your approach.
Thanks,
Joao Araujo
|
|
|
|
|
JustinGreenwood wrote: Once you've used any strongly typed framework along with a code generator in a large application, you will never go back.
Maybe it is "Once you've used strongly typed ... you can't ever go back." A strongly typed system tends to tie your hands and restrict your ability to change things; after a while you get used to it and think your hands were born that way (tied).
So many times I've seen suggestions to make changes to some badly designed strongly typed system, and the answer is typically "we can't do it."
My articles and software tools
|
|
|
|
|
Yep, reminds me of when I went for a job interview once and asked about their software reuse. The manager replied that they had a terrific reuse program -- all the parameters were passed in as void pointers and the results were returned as void pointers. That way they could pass in anything and turn the results into anything they wanted -- 100% reuse. He was so proud of his team.
I snatched my resume from his hands, turned on my heel, and left his office as quickly as I could. I could almost hear the flying monkey theme from The Wizard of Oz as I left the building...
All science is either physics or stamp collecting.
|
|
|
|
|
ReleaseTheHounds wrote: all the parameters where passed in as void pointers and the results were returned as void pointers..
Interesting. Like passing objects everywhere in C#, eh?
I guess one of the main arguments for ORM is the strong typing, which puts one right into the argument as to whether weak or strong typing is better. And there are no easy answers.
Marc
Pensieve
Some people believe what the bible says. Literally. At least [with Wikipedia] you have the chance to correct the wiki -- Jörgen Sigvardsson
|
|
|
|
|
Did it look like this?
public abstract class GenericDAL
{
    public abstract object[] Execute(string actionName, params object[] parameters);
}
|
|
|
|
|
Xiangyang Liu wrote: Once you've used strongly typed ... you can't ever go back
Xiangyang Liu wrote: the answers are typically "we can't do it".
That always amazes me, when a system becomes so inflexible that the programmers will say that. Although, it is easy to create an architecture that needs to be basically tossed out and redesigned to meet some new requirements, but that's primarily because shortcuts were taken to the OOD. Which I've done myself numerous times.
Marc
Pensieve
Some people believe what the bible says. Literally. At least [with Wikipedia] you have the chance to correct the wiki -- Jörgen Sigvardsson
|
|
|
|
|
Marc, let me throw this into the fire. EntitySpaces, a new architecture I've created with others through EntitySpaces LLC, generates SQL on the fly, and it gives the SQL back to you so you can see it if you so desire. You can also write an EntitySpaces project for Oracle and run the EXACT same code on Access, Oracle, or MySQL without recompiling; it's merely a connection string change. However, it is all strongly typed, but very lightweight, and generatable. We use inheritance to avoid regeneration errors: your custom classes inherit from the generated classes, and you never hand-edit the base (generated) classes; you just override any methods or properties in your custom class. Thus, when you look at your custom classes, they are either empty (because the generated class does it all) or they contain only your real business logic, which makes it very nice indeed.
The nice thing about this approach is there are never runtime errors. The code compiles 100% clean, and we never use hard-coded field names like "EmployeeID"; rather, we use Employees.Columns.EmployeeID. You can write a query like this, via intellisense, and it runs on any database:
public partial class EmployeesQuery : esEmployeesQuery
{
    public bool CustomLoad()
    {
        Select(EmployeeID, FirstName, LastName, TitleOfCourtesy)
        .Where
        (
            Or(LastName.Like("%A%"), LastName.Like("%O%")),
            this.BirthDate.Between("1/1/1940", "1/1/2006")
        )
        .OrderBy(LastName.Descending, FirstName.Ascending);

        return this.Load();
    }
}
It's amazing how productive we are with this approach; it's so easy a child could do it, the code is generated in seconds, and it can be regenerated in seconds if/when the schema changes.
Employees entity = new Employees();
entity.AddNew();
entity.FirstName = this.txtFirstName.Text;
entity.LastName = this.txtLastName.Text;
entity.Save();
Look at the code above: spoon-fed via intellisense, it functions with or without stored procs (you get to choose), runs on any provider, is strongly typed, serializable, and so forth...
Mike Griffin
EntitySpaces LLC
http://www.entityspaces.net
-- modified at 10:31 Wednesday 26th April, 2006
|
|
|
|
|
Mike Griffin wrote: You can also write an EntitySpaces project for Oracle and run the EXACT same code on Access, Oracle or MySQL without recompiling, it's merely a connection string change.
Mike, I hope you meant that if you write a project for Oracle, you have to keep your DB exactly the same on MySQL, Access, SQL Server, etc.
It's hardly ever just changing the connection string. If I have a project on SQL Server with GUIDs, bit fields, and the like, there's no way I'll be able to use that same project on Oracle without converting data somewhere: Oracle has neither a GUID type nor a bit type.
We use type converters behind the scenes so you can keep your entity code like:
myCar.IsCabrio = true;
and be able to save that myCar to Oracle or whatever DB. Still, it'll never be just a switch of a connection string for the developer. At runtime, sure, just switch between adapter instances, per call, and you can access whatever DB you like; but during development, it's not possible.
Or the DB has to obey strict rules, like using only types from a small list, and that's too restrictive.
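Neither product's converter code appears in the thread, so here is a generic sketch of the "type converter behind the scenes" idea under discussion: the entity keeps a real bool, and a per-database converter decides its stored representation. The interface and class names are invented for illustration; this is not the LLBLGen Pro or EntitySpaces API.

```csharp
using System;

// Hypothetical sketch of a per-database type converter: the entity property
// stays a bool, and a converter picked per provider handles the DB-side
// representation (a number on Oracle, a 'Y'/'N' CHAR(1) elsewhere, etc.).
public interface IBoolConverter
{
    object ToDb(bool value);
    bool FromDb(object dbValue);
}

// Oracle has no bit type, so store the bool as 0/1 in a NUMBER(1,0) column.
public class OracleNumberBoolConverter : IBoolConverter
{
    public object ToDb(bool value) { return value ? 1 : 0; }
    public bool FromDb(object dbValue) { return Convert.ToInt32(dbValue) != 0; }
}

// Legacy schemas often use CHAR(1) 'Y'/'N' flags instead.
public class YesNoCharBoolConverter : IBoolConverter
{
    public object ToDb(bool value) { return value ? "Y" : "N"; }
    public bool FromDb(object dbValue) { return (string)dbValue == "Y"; }
}
```

This also frames the question raised below: if the converter set is fixed ("hardcoded"), a legacy CHAR(1) Y/N column can't be mapped; if converters are pluggable, it can.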
--
Lead developer LLBLGen Pro: http://www.llblgen.com
Only the true wise understand the difference between knowledge and wisdom.
|
|
|
|
|
Actually, we have a huge NUnit test suite that runs against EntitySpaces, available for download to all registered users. We run a huge battery of tests against Oracle, Access, Microsoft SQL Server, and MySQL using the exact same binary code, the only difference being the EntitySpaces connection string entry in the config file, which also indicates whether you want to use stored procedures or dynamic SQL, and whether you want ADO.NET command-based transactions or enterprise distributed transactions. All in all, it's some 30-odd configurations, all driven by the config file.
You are correct that you have to use a subset of all available datatypes, of course; however, it's very forgiving. We use a 'bit' for boolean in SQL Server, and in Oracle it's represented by a 'number', and it all works fine. EntitySpaces does an excellent job with provider independence, actually.
See http://www.entityspaces.net/portal/Documentation/TestSuiteGettingStarted/tabid/99/Default.aspx[^]
Maybe you can study our techniques and improve LLBLGen if you're having trouble with provider independence?
|
|
|
|
|
Though, are these conversions hardcoded? I mean, if I have an entity class with a boolean field Foo and I want to save it into Oracle, do I have to have a NUMBER(1,0) field there? What if I want to map it onto a CHAR(1) with 'Y' and 'N'?
I also just switch a connection string to test a set of tests against Oracle, SQL Server, or another database, though I don't have to limit myself to a subset of types, nor does the DB schema have to obey restrictions. Your schemas, IMHO, do have to obey restrictions, as in: it won't work with every legacy/existing DB out there.
--
Only the true wise understand the difference between knowledge and wisdom.
|
|
|
|
|
No, the mappings are dynamic. We use a rather interesting feature of ADO.NET to accomplish it; it's a very powerful approach, actually, and it will darn near map anything to anything...
|
|
|
|
|
I must be missing a part of the ADO.NET manual, then, because I don't recall a feature which can map any type onto any other type for data persistence/retrieval.
Also, if what you're saying is true, why do you still need a subset of the types?
--
Lead developer of LLBLGen Pro: http://www.llblgen.com
Only the true wise understand the difference between knowledge and wisdom.
|
|
|
|
|