|
Harry Sun wrote: NHibernate.ADOException: could not insert: [ConsoleApplication1.user][SQL: INSERT INTO user (UNAME, NID) VALUES (?, ?)]
Those question-marks are parameter placeholders. You are using generator=Native... is your UID column set to Identity?
Otherwise check the InnerException property.
|
|
|
|
|
Thank you for your reply. I know I did something wrong - I just wasn't sure which forum this question belonged in. I will never do that again. Thanks again. Please forgive me.
|
|
|
|
|
A user adds different kinds of shapes to a panel: depending on where he clicks on the panel, a shape is displayed. I am calling the addShape() function when the user clicks on the panel. Everything is OK up to this point.
But now let's say a shape is selected and the user wants to add another shape on top of the existing one. Currently the user has to add the shape first by clicking elsewhere on the panel; the shape appears, and then the user has to drag it to the location he wants it to be.
What I want is that, even if an existing shape is selected and a new shape is being added on top of it, I can add it directly on top, rather than clicking first on the panel and then dragging it to the desired location.
How can I solve this puzzle? Any input will be highly appreciated.
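Roughly, the click handling I have in mind looks like this (a simplified sketch with made-up names, not my real code): an explicit "add mode" flag decides whether a click means "select" or "add a shape right here", so a new shape is created at the click point even when the click lands on a selected shape.

```python
# Simplified sketch (Shape, Panel, on_click are all hypothetical names):
# decide per click whether to select an existing shape or drop a new one
# at the click point, based on an explicit "add mode" flag.

class Shape:
    def __init__(self, x, y, w=40, h=40):
        self.x, self.y, self.w, self.h = x, y, w, h

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

class Panel:
    def __init__(self):
        self.shapes = []
        self.selected = None
        self.add_mode = True  # e.g. set when a toolbox shape button is pressed

    def on_click(self, x, y):
        if self.add_mode:
            # Create the new shape directly at the click point, even if the
            # click lands on top of an existing (possibly selected) shape.
            shape = Shape(x, y)
            self.shapes.append(shape)
            self.selected = shape
        else:
            # Plain click: hit-test from the topmost shape down to select.
            for shape in reversed(self.shapes):
                if shape.contains(x, y):
                    self.selected = shape
                    return
            self.selected = None
```

The point is that the handler never requires clicking an empty area first; the mode flag, not the hit test, decides whether a shape gets added.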
Thanks
|
|
|
|
|
Hi,
I'm working on my second project at the moment that makes use of a 3-layer model. On my first project I had circular dependencies so that the data layer could hydrate and return a business object and take one as a parameter to save it back to the database. This worked fairly well although it was rather complicated.
This time round I'm creating the data layer so that it takes individual values of a business object - strings, integers, etc - in order to save the objects, and it similarly returns the individual values through reference parameters to allow the business layer to hydrate objects.
The second method seems to be making everything a lot simpler, however I've run into a little problem. With the first method, when I wanted a collection of business objects the data layer would just create a collection, add each object to it, and return it. With the less coupled second method I cannot do this and must rely on the business layer to create the collection from the values returned by the database. The problem I'm facing is that, without creating extra classes in the data layer just to hold records from the database, I can't see a way of getting the data back to the business layer.
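To illustrate the second method (a much-simplified sketch in Python for brevity, with made-up names; my real code is C#): the data layer hands back plain rows rather than business objects, and the business layer hydrates its own types from them.

```python
# Hypothetical sketch: the DAL returns plain rows (no business types),
# and the business layer builds its own collection from them.

def fetch_user_rows():
    # Stand-in for a real database query; returns plain (id, name) tuples.
    return [(1, "Alice"), (2, "Bob")]

class User:
    """Business-layer type; the DAL knows nothing about it."""
    def __init__(self, user_id, name):
        self.user_id = user_id
        self.name = name

def load_users():
    # Business layer: hydrate the collection itself from raw row data.
    return [User(user_id, name) for (user_id, name) in fetch_user_rows()]
```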
I've read as much material on data layers as I can find, but none of it seems to cover this sort of thing. Has anyone got any ideas how to overcome this without creating extra classes which mimic those in the business layer?
Regards,
Matt
|
|
|
|
|
CaptainMatt wrote: The second method seems to be making everything a lot simpler, however I've run into a little problem.
Have you not recognized the seeming contradiction in your statement? Systems will have complexity if they do anything much at all, period. There is no getting away from it. You might find Grady Booch's Turing lecture[^] from last year interesting.
All you have done is abandon an object oriented design for a procedural design. With that will come all the well known problems of a procedural solution. Your previous experience with complexity may seem like a cake walk by the time you are done.
led mike
|
|
|
|
|
Thanks for the reply,
I realise that a completely object-oriented design is most probably the correct way to go. However, half of the material I've read on creating data layers recommends the procedural approach. I thought it might be a good idea to try it out and see how it goes - apparently not too well, though.
Since my first message I've made a start on rewriting the layer using objects. At the same time, though, I'm still interested in how the people who have taken this route have handled this sort of thing, and whether extra data-only objects are used.
Another reason my previous object oriented design was rather difficult to work with was that I initially followed an article that created persistence objects for every business object. A single persistence/database/storage object to persist the whole system makes things easier.
Regards,
Matt
|
|
|
|
|
While my experience in developing apps that have Datalayers is extensive, I have never extensively studied the problem domain. What I have picked up from direct experience and random discussions and articles is that the work and/or complexities associated with this domain have not been eliminated.
led mike
|
|
|
|
|
The common way people end up writing their DAL seems to be a procedural CRUD interface which hits stored procedures in the DB ("for security" / "for performance"). This is generally done because people aren't leveraging the metadata features of the language.
Also, people like to use the Enterprise Data Access Block. So much so that they use it for general data access - when it's really just a database abstraction kit - even when they have no plans to ever switch database vendors. That ends up locking them out of a whole heap of cool tools :P
In .Net at least, it's possible to make a generic persistence layer that can operate on any business object which is correctly tagged up with persistence attributes. With a smattering of generics you can have a single persistence layer that isn't aware of your business objects.
The way we do this is using reflection to read attributes that map the object model to the database schema. Reflection does have a small performance overhead - but you can make up for that with flexible queries - for example, joining on referenced records in one round trip. It also lets you provide generic FindByX routines, etc.
So, yes, it is possible - and it's a lot better than having to maintain a whole bunch of structures that are only used for passing data between tiers.
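A rough sketch of the idea (in Python here for brevity, with class-level metadata standing in for .NET custom attributes read via reflection; all names are made up): the mapping lives on the business class, and one generic routine builds the SQL for any mapped type.

```python
# Hypothetical sketch of a generic persistence layer: mapping metadata lives
# on the class (a simple class attribute standing in for .NET attributes),
# and one generic routine generates the SQL for any mapped object.

class Customer:
    __table__ = "Customer"
    # attribute name -> column name
    __columns__ = {"customer_id": "CustomerId", "name": "Name"}

    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

def build_insert(obj):
    """Build a parameterized INSERT for any object carrying mapping metadata."""
    cols = type(obj).__columns__
    column_list = ", ".join(cols.values())
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {type(obj).__table__} ({column_list}) VALUES ({placeholders})"
    params = tuple(getattr(obj, attr) for attr in cols)
    return sql, params
```

The persistence routine never mentions Customer; tag up another class with `__table__`/`__columns__` and it works unchanged.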
|
|
|
|
|
The problem with a completely object-oriented approach is that it doesn't work so well in a server scenario. Getting a bunch of data and instantiating a huge collection of instances is rather pointless if all you'll do with those instances is serialize them, send the data off as, say, XML to a client somewhere, and then discard all the objects. Likewise, deserializing a stream of (updated) data into a collection of objects and then using reflection or some other run-time schema mapping is rather useless.
In a desktop scenario it sure is nice: bind the grid to your objects and, when the user changes something, modify the instances you've already loaded. When he saves, call Save(). It's great. But it doesn't work like that on a server, because you can't keep all those objects alive between the user reading the data and wanting to save it.
Which is why there is so much talk about "service" orientation. They say it's "stateless objects", but without state it isn't really an object at all! Object orientation is all about encapsulation. Service orientation sacrifices that, because there's hardly any point protecting the state of an object that is used as little more than a serialization/deserialization mechanism for SQL Server data...
|
|
|
|
|
Hi,
I'm not quite sure I get what the problem is, but I'll attempt to give input based on the assumption that the problem is duplication of the same logic in the data access and business logic code. If so, a possible solution might be to derive the business classes from the data access classes. That is, make your data access classes so they contain all data storage for the type - as protected fields - plus the knowledge of how to read/write those fields to and from persistent storage. Then derive business classes from these and publish those parts of the data that should be directly manipulable from the outside as properties, adding validation logic as appropriate. (Some validation might be more appropriate in the DAL; there is a grey zone here. "Name cannot be more than 50 characters" might be considered data access logic or business logic; at first glance it only depends on the data store and as such could be considered DAL logic, but then again it might actually affect the layout of reports and what have you. In my view, it is simpler to keep all validation in one place, and if so it is undoubtedly the business layer that should do it.)
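A rough sketch of that layering (in Python here purely for illustration; all the names are made up): the data-access base class owns the stored fields and the persistence knowledge, and the business class derives from it and adds validated properties on top.

```python
# Hypothetical sketch of the suggested layering: data-access base class
# holds the raw fields and knows how to persist them; the business class
# derives from it and publishes the data with validation.

class PersonData:
    """Data-access layer: raw storage plus load/save knowledge."""
    def __init__(self):
        self._name = ""             # "protected" storage, by convention

    def save(self, store):
        store["name"] = self._name  # stand-in for a real database write

class Person(PersonData):
    """Business layer: exposes the data through validated properties."""
    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if len(value) > 50:
            raise ValueError("Name cannot be more than 50 characters")
        self._name = value
```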
|
|
|
|
|
Hi gurus, I'm creating a web application for the internet with scalability in mind. In my application I'll have forums used by different groups of people (each group will have its own forums). I will start with 1000 or 2000 groups, with the potential to grow to 5000 groups (it's guaranteed that I won't exceed that number, 5000, given the nature of my application). I was thinking that keeping all the forum posts in one table would cause many problems, such as a very large index and slow searches (since I want several indexes on the table: PostID, ForumID, GroupID and PostDate). So I was thinking it might be better to have a separate table for each group's forum posts, in order to have small indexes, faster searches, and cheaper inserts (especially since some groups are expected to have a large number of posts per day), and also to make it easier to move tables to other database servers if the application, and hence the web farm, grows. Now I'm really confused about how to design my DAL.
Assuming that the structure of the forum posts table is like that (this is just a simplified structure not the real one):
CREATE TABLE ForumPosts_x
(
PostID INT IDENTITY(1,1) PRIMARY KEY,
ForumID INT NOT NULL, -- which refers to the ForumID in another table named Forums which includes all forums for all groups
ParentPostID INT NULL,
PostSubject NVARCHAR(200) NOT NULL,
PostText NVARCHAR(5000) NOT NULL,
PostDate DATETIME NOT NULL DEFAULT GETUTCDATE()
)
Note that there's no group ID column, as the x in the table name is the group ID, e.g. ForumPosts_19 for group ID 19.
Now as to designing my DAL, should I:
1. Create stored procedures with dynamic SQL and pass the group id to the procedure i.e. use EXEC and sp_executesql (there's an interesting article on the subject here: http://www.sommarskog.se/dynamic_sql.html)
For example:
CREATE PROCEDURE InsertForumPost
@GroupID INT,
@ForumID INT,
@ParentPostID INT,
@PostSubject NVARCHAR(200),
@PostText NVARCHAR(5000)
AS
DECLARE @tablename NVARCHAR(50), @sql NVARCHAR(4000)
SET @tablename = N'ForumPosts_' + CAST(@GroupID AS NVARCHAR(10))
SET @sql = N'INSERT INTO dbo.' + QUOTENAME(@tablename) +
' (ForumID, ParentPostID, PostSubject, PostText) VALUES (' +
'@ForumID, @ParentPostID, @PostSubject, @PostText)'
EXEC sp_executesql @sql, N'@ForumID INT, @ParentPostID INT, @PostSubject NVARCHAR(200), @PostText NVARCHAR(5000)', @ForumID, @ParentPostID, @PostSubject, @PostText
2. Create the procedures with static SQL for each group, i.e. each group has its own set of procedures, which would mean a large number of procedures
For example, the procedure for inserting a new forum post for GroupID #19 would be:
CREATE PROCEDURE InsertForumPost_19
@GroupID INT,
@ForumID INT,
@ParentPostID INT,
@PostSubject NVARCHAR(200),
@PostText NVARCHAR(5000)
AS
INSERT INTO dbo.ForumPosts_19
(ForumID, ParentPostID, PostSubject, PostText)
VALUES
(@ForumID, @ParentPostID, @PostSubject, @PostText)
3. Use SQL text directly in my code, C# in my case (which I'm highly considering but a little concerned about how to execute multiple SQL statements - as you can in stored procedures - without having to call ExecuteNonQuery() multiple times, which I believe could affect performance)
4. Drop the whole thing and stick to using one table for all the groups
What would you do if you were designing such an application? Any suggestions are highly appreciated...
|
|
|
|
|
Trust your database server. In general, having loads of indexes will slow your inserts, not your selects. You won't need to do any partitioning unless you have a _lot_ of data.
You can send multiple queries and get multiple responses in one round trip.
The stored procedures aren't giving you any benefit here - they're just creating noise.
If I was designing this app I'd probably go with a Thread table inheriting Post, and giving Post a reference to the parent Thread (and possibly also the Post that was replied to - if you wanted to track that). Add your Thread table referencing Group as well for your groups. That lets you pull whole threads out of one index in one select, and Threads for each Group. Then I'd use Diamond Binding to handle my DAL...
|
|
|
|
|
Mark Churchill wrote: Then I'd use Diamond Binding to handle my DAL
ROTFLMAO You so had me until that part
led mike
|
|
|
|
|
Well *I* don't have to weigh up cost/benefit because I can click a button and get a license
|
|
|
|
|
Hi Mark, thanks a lot for your help, but would you mind explaining in more details?
Mark Churchill wrote: You can send multiple queries and get multiple responses in one round trip.
Well, actually, I doubt this could be of much benefit in my case. As I'm creating a web application, my queries will always be driven by individual user actions, so there's no way to send multiple queries at the same time in my case.
Mark Churchill wrote: The stored procedures aren't giving you any benefit here - its just creating noise.
Could you explain more, please? I understand that you want me to go with one table, but then why not use sprocs? We won't have the problem of caching query plans for every table, as we're going to use only one table.
Mark Churchill wrote: If I was designing this app i'd probably go with a Thread table inheriting Post, and giving Post a reference to the parent Thread (and possibly also the Post that was replied to - if you wanted to track that). Add your Thread table referencing Group as well for your groups. That lets you pull whole threads out off one index in one select, and Threads for each Group. Then I'd use Diamond Binding to handle my DAL...
I'm a little lost here, what exactly do you mean by inheritance here? I was going to use a ParentPostID INT NULL field in the ForumPosts table (see the CREATE TABLE section in my original post) which refers to the thread; is that what you mean? I was also going to index that field so that I can find threads and replies fast, if that's what you mean by pulling out the threads.
I was going to index exactly the PostID, GroupID, ParentPostID and PostDate fields (this is why I thought about separating the data into one table per group: I thought I would have too many indexes - four, as you can see - and that this could hurt insert performance tremendously).
By the way, if one table is used, the PostID will be per group, not an identity field. I was only going to use identity fields with a table for each group, not with a single table (this is done for scalability's sake, to make it easy to move data to other databases, or even to tables with the same structure in the same database). I'll have another table with the last ID used by every group, e.g.
CREATE TABLE LastUsedID
(
GroupID INT NOT NULL,
LastUsedID INT NOT NULL
)
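The idea is that allocating the next per-group PostID is just an increment-and-read against LastUsedID inside one transaction, so two concurrent posters in the same group never get the same ID. A rough sketch (using SQLite purely for illustration; the real thing would be SQL Server):

```python
# Hypothetical sketch of per-group ID allocation from the LastUsedID table:
# the UPDATE and the read-back run in one transaction so concurrent callers
# in the same group cannot receive the same ID.
import sqlite3

def next_post_id(conn, group_id):
    with conn:  # one transaction: the UPDATE and SELECT commit together
        conn.execute(
            "UPDATE LastUsedID SET LastUsedID = LastUsedID + 1 WHERE GroupID = ?",
            (group_id,))
        row = conn.execute(
            "SELECT LastUsedID FROM LastUsedID WHERE GroupID = ?",
            (group_id,)).fetchone()
    return row[0]
```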
Thanks again for all your help, waiting for your reply...
|
|
|
|
|
I wrote: You can send multiple queries and get multiple responses in one round trip.
This was in reference to "...which I'm highly considering but a little concerned about how to execute multiple SQL statements..." - this isn't anything to worry about. A query isn't restricted to a single statement - so you can send "select foo; select baz" in one ExecuteDataSet() and get back a DataSet containing multiple DataTables.
I wrote: The stored procedures aren't giving you any benefit here - its just creating noise.
The stored procedures were just performing basic CRUD operations, and SQL Server will cache the execution plan for your ad-hoc queries anyway. To take a simplistic view, stored procedures are for providing abstraction/code reuse rather than performance (some would also say they help with security).
Waleed Eissa wrote: I'm a little lost here, what exactly do you mean by inheritance here?
A thread is basically a post that also has some extra information, like a title, a group it belongs in, etc. Say your database structure has a table, Product (Id, Description) and ServiceProduct(Id, CostPerHour) with ServiceProduct.Id being a foreign key to Product.Id. This defines that for every set of ServiceProduct data there is Product data, meaning ServiceProduct inherits Product, which is pretty analogous to how inheritance relationships work in code. This can be handy.
Using a ParentPostId field makes it difficult to pull a whole thread out of the database. Given a parent post, I would have to do an index scan to get the 2nd post, then again to get the 3rd, etc.
Say you have this setup:
Post (Id, Author, BodyText, TimePosted, ParentThreadId) and Thread (Id, Title, GroupId)
Thread.Id is a fk to Post.Id (inheritance)
Post.ParentThreadId is a fk to Thread.Id (reference)
This means that I can easily select threads in a group (Thread by GroupId), Posts in a Thread (Post by ParentThread).
If you are feeling uncomfortable with the inheritance relationship, then you could just have a Thread table that acts as a bit of a stub to group posts.
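To make that concrete, here's a rough sketch of the shape of it (illustrated with Python's sqlite3 here; the tables and columns are just the ones outlined above, and this is off the cuff, not production schema): Post.ParentThreadId references Thread, so all the posts in a thread come back from a single indexed select.

```python
# Hypothetical sketch of the suggested Thread/Post layout: posts reference
# their parent thread, so one select over one index pulls a whole thread.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Thread (Id INTEGER PRIMARY KEY, Title TEXT, GroupId INTEGER);
    CREATE TABLE Post (Id INTEGER PRIMARY KEY, Author TEXT, BodyText TEXT,
                       ParentThreadId INTEGER REFERENCES Thread(Id));
    CREATE INDEX ix_post_thread ON Post(ParentThreadId);
    INSERT INTO Thread VALUES (1, 'First thread', 19);
    INSERT INTO Post VALUES (1, 'alice', 'opening post', 1);
    INSERT INTO Post VALUES (2, 'bob', 'a reply', 1);
""")

def posts_in_thread(thread_id):
    # One select over one index retrieves the whole thread.
    return conn.execute(
        "SELECT Author, BodyText FROM Post WHERE ParentThreadId = ? ORDER BY Id",
        (thread_id,)).fetchall()
```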
It might be worth having a look at how forums like phpbb handle their database structure (considering I'm coming up with this on the fly).
I'm not comfortable with the LastUsedId. It seems incredibly unlikely you would approach the 4 billion odd posts that just an int would provide. SQL Server could handle that kind of indexing using the processor in my phone - you'd be out of disk before you ran out of primary keys - and if you need to partition, just move everything out by GroupId - having holes in your index isn't an issue.
Insert performance isn't an issue for you - your users are reading and searching much more than they are posting. The more indexes the better: every millisecond you spend updating an index is going to save you a hundred in lookup time.
|
|
|
|
|
Mark Churchill wrote: I wrote:
You can send multiple queries and get multiple responses in one round trip.
This was in reference to "...which I'm highly considering but a little concerned about how to execute multiple SQL statements..." - this isn't anything to worry about. A query isn't restricted to a single statement - so you can send "select foo; select baz" in one ExecuteDataSet() and get back a DataSet containing multiple DataTables.
Well, I'm sorry, I probably should've explained it more clearly. What I actually meant is that when you use stored procedures you can easily put many SQL statements in the same procedure,
for example:
begin transaction
insert into foo
update foo2 set ..
.. etc
This is very easy with stored procedures but, I guess, not so easy with ad-hoc SQL statements - that's what I meant to say.
Mark Churchill wrote: A thread is basically a post that also has some extra information, like a title, a group it belongs in, etc. Say your database structure has a table, Product (Id, Description) and ServiceProduct(Id, CostPerHour) with ServiceProduct.Id being a foreign key to Product.Id. This defines that for every set of ServiceProduct data there is Product data, meaning ServiceProduct inherits Product, which is pretty analogous to how inheritance relationships work in code. This can be handy.
Using a ParentPostId field makes it difficult to pull a whole thread out the database. Given a parent post I would have to do an index scan to get the 2nd post, then again to get the 3rd, etc.
Say you have this setup:
Post (Id, Author, BodyText, TimePosted, ParentThreadId) and Thread (Id, Title, GroupId)
Thread.Id is a fk to Post.Id (inheritance)
Post.ParentThreadId is a fk to Thread.Id (reference)
This means that I can easily select threads in a group (Thread by GroupId), Posts in a Thread (Post by ParentThread).
If you are feeling uncomfortable with the inheritance relationship, then you could just have a Thread table that acts as a bit of a stub to group posts.
It might be worth having a look at how forums like phpbb handle their database structure (considering I'm coming up with this on the fly).
I like your idea of having a separate table for threads. I think this can speed things up, as we can have fewer indexes on the same table. It might be a bit harder to maintain, since the data lives in two tables, but I still like the idea.
Mark Churchill wrote: I'm not comfortable with the LastUsedId. It seems incredibly unlikely you would approach the 4 billion odd posts that just an int would provide. SQL Server could handle that kind of indexing using the processor in my phone - you'd be out of disk before you ran out of primary keys - and if you need to partition, just move everything out by GroupId - having holes in your index isn't an issue.
Actually, this wasn't meant to avoid reaching the maximum limit for int, as that's large enough (by the way, the max is 2 billion, not 4, since int is signed). It's intended for scalability, to make it easier to move data to different databases (say a group has way too many posts and is making the table too large, so you move that group's data into a separate database), without having to worry about the correct seed value for the identity field.
Thanks for all your help...
|
|
|
|
|
If you are using SQL Server 2005, have a look at partitioned tables, partitioning by group and date - this gives very good performance. These are single tables, but physically split on the columns you define, allowing the underlying files to be located on different physical disks.
If you are using SQL Server 2000, take a look at partitioned views - not quite as friendly as partitioned tables, but good nevertheless. This is really a view over multiple tables, so table structure changes are a bit of a pain, but done properly you can insert into the view and it will add the record to the correct underlying table.
In either case you will be able to have a single stored proc to insert, and, although SQL Server will cache ad-hoc SQL execution plans, I would be very reluctant to use anything other than stored procs for data access. They provide a good degree of protection against SQL injection and are a single point of data access, so any changes are abstracted from your code.
Hope some of this makes sense and helps.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Hi There
Thanks for a gr8 site.
One of the most important aspects of being a developer is testing, no matter what the language is. And sometimes a few bugs *slip* in and my application goes to production.
I need to improve my testing and development technique. Do any of you know of any books (hard or soft copy) that can help developers in this area?
I would really appreciate it.
Thanks.
Chris
|
|
|
|
|
Try google?
"I guess it's what separates the professionals from the drag and drop, girly wirly, namby pamby, wishy washy, can't code for crap types." - Pete O'Hanlon
|
|
|
|
|
I am collaborating on a Visual Studio 2005 project with a German colleague. The application is being published in Germany, but he (in Germany) and I (in the US) work on different parts of the overall project to suit the needs of our constituency.
We have noticed a new problem lately. He wrote a routine to create a text file that we can both use. During the running of the code, however, I get a trapped error trying to create the file, while in Germany the routine works fine.
Stepping through the sub-routine it turns out the error is being generated because when the application is published to the German server, a file path to my colleague's Documents and Settings folder is being referenced. The program only shows me this in debug mode, and neither of us is sure how a Visual Studio program written in VB stores this kind of data.
Is anyone aware of the mechanics of this?
|
|
|
|
|
Mike Nelson wrote: I get a trapped error
So you don't think the error message might help us to help you? Is that why you didn't post it?
Mike Nelson wrote: is published to the German server
Is this an ASP.NET application?
led mike
|
|
|
|
|
This is primarily a desktop application, but it is published to an FTP site so that each time a user starts the app, if there is an internet connection, it searches for updates to the program.
|
|
|
|
|
OK, it's a desktop app - that helps. Now what about the error message?
led mike
|
|
|
|
|
Sorry for the delay in responding -- I'm a one man shop and got tied up in an entirely different project. Here's what I get when I step through the relevant subroutine:
First, I get this:
"The source file is different from when the module was built. Would you like the debugger to use it anyway?" The paths shown are:
Source file: C:\Program Files\Microsoft Visual Studio 8\CAPS Project\Src 1-21-08\CAPSpro.NET\frmDocManager.vb
Module: C:\Program Files\Microsoft Visual Studio 8\CAPS Project\Src 1-21-08\CAPSpro.NET\bin\CAPSpro.exe
Click Yes:
Then a message comes up telling it cannot find the file, showing a path which is the path of the German developer who originally published the program ("The file 'C:\Dokumente und Einstellungen\teg\Eigene Dateien\CAPS\Src\VB2005\CAPSpro.NET\frmDocManager.vb' does not exist.").
On one occasion I was able to change the path to my local machine, where I am working on the source code, and the output file was successfully saved. Without the debugger, however, there is no message that the file cannot be found; it only appears when I step through with the debugger one line at a time.
|
|
|
|
|