|
|
Thanks for the link to the great article, i.j.russell; I'm still reading and learning.
Ah, I see, my description of the question wasn't very good. I should expand a bit on what I mean by profile. A better description would be an item, where multiple of these items, think thousands or tens of thousands, are associated with a single tenant.
The new software being developed copied a lot of code from the old version, where an item, a single file, was represented as a database using an API provided by an antiquated database product.
The new software is trying to use this same scheme in SQL Server, one database per item. Since there can be tens of thousands of items, there will be tens of thousands of databases. I can hear the murderous laughter of database administrators everywhere.
I think this scheme breaks a number of well-established guidelines but I’d like to give the product owner ammunition, concrete definitive documentation, to force the developers to change the product to something more reasonable. (Oh, and I’m one of the developers – just showing up late in the development cycle of the product.) The product owner is not very technical and so may have been convinced for some reason or other that this was an okay solution.
|
|
|
|
|
The other option to consider is going the NoSQL route and using something like MongoDB, CouchDB, or even cloud-based storage like Microsoft Azure Table Storage or Amazon SimpleDB.
|
|
|
|
|
Good options, though I'm pretty sure they are going to use SQL Server no matter what. The customers of the product are typically Microsoft shops so they really want SQL Server.
I just can't imagine how the developers convinced the product owner that creating tens of thousands of databases would be okay.
|
|
|
|
|
|
Exactly, 'technically possible' but a horrendous idea. I'm guessing this is how development 'convinced' the product owner.
Any good web pages to convince them that this is a ridiculous idea?
|
|
|
|
|
I can't find any sites that would help, but that in itself is helpful, because it means that the concept is either uber-cutting edge or plain stupid. Since SQL Server has been around for well over a decade, I am going for the latter.
From a technical perspective, I can think of a few issues:
1. Your hardware requirements are going to be well in excess of what your application data should need. Your data should be easily handled by a single SQL Server Express database.
2. Connection pooling is going to be interesting as each database will have a different connection string. Performance will be adversely affected as new connections are created.
3. Whilst you can carry out DDL in a stored procedure to create a database, table(s) and indexes, this will take a few seconds each time.
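For comparison, the conventional alternative is one shared database with items as rows keyed by tenant, which sidesteps all three issues above. A minimal sketch (table and column names are made up for illustration):

```sql
-- One database, one table: items are rows, not databases.
CREATE TABLE dbo.Item
(
    TenantId   int           NOT NULL,
    ItemId     int           NOT NULL,
    Name       nvarchar(200) NOT NULL,
    -- ...remaining item payload columns...
    CONSTRAINT PK_Item PRIMARY KEY (TenantId, ItemId)
);

-- One connection string (so one connection pool), no per-item DDL,
-- and tens of thousands of items are just an index seek away:
SELECT Name
FROM dbo.Item
WHERE TenantId = @TenantId AND ItemId = @ItemId;
```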
|
|
|
|
|
I'd go with i.j's plain stupid scenario; this is so far outside sensible that it would not be entertained by any reasonable designer.
Consider a code base change to the database: maintenance will be an absolute nightmare, and the cost of maintaining an app is much higher than the cost of developing it.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
The key to your argument should be cost; each cost can then have a technical answer to the question. For example: annual maintenance cost is $500k because we need to employ a team of DBAs to manage thousands of databases.
Give your suggested solution with a cost, and give his solution with its associated costs. You can always ham it up a bit to ensure that your solution looks better.
This will get his interest far more than technical discussions on the merits or otherwise of solutions.
|
|
|
|
|
I have a database model that has an ISBN as the primary key.
The ISBN is input by the user.
It is the primary key for Book, and Book is linked to Book_By_Authors on the ISBN number,
and it is also linked to BookCategory by the ISBN, and so forth.
How in the world could I update the ISBN easily? I keep getting constraint issues with the foreign keys. Should I create a method that stores the existing values, deletes the record, and then adds the record back with the corrected data? Just wondering what should be done in this case. Thanks.
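As a stopgap, if the ISBN must stay the key for now, SQL Server can propagate key changes to the child tables for you via ON UPDATE CASCADE. A sketch, with hypothetical constraint names:

```sql
-- Recreate each foreign key with cascading updates, so that
-- changing Book.ISBN automatically rewrites the child rows.
ALTER TABLE dbo.Book_By_Authors
    DROP CONSTRAINT FK_BookByAuthors_Book;

ALTER TABLE dbo.Book_By_Authors
    ADD CONSTRAINT FK_BookByAuthors_Book
    FOREIGN KEY (ISBN) REFERENCES dbo.Book (ISBN)
    ON UPDATE CASCADE;

-- Now a single UPDATE fixes the typo everywhere:
UPDATE dbo.Book SET ISBN = @NewIsbn WHERE ISBN = @OldIsbn;
```

This avoids the delete-and-reinsert dance, though the surrogate-key approach suggested below is the cleaner long-term fix.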
|
|
|
|
|
Perhaps you'd be better off using a surrogate key rather than the ISBN if you want to allow users to edit them (you'd still have to check for collisions if they can manually alter an ISBN).
Tychotics: take us back to the moon
"Life, for ever dying to be born afresh, for ever young and eager, will presently stand upon this earth as upon a footstool, and stretch out its realm amidst the stars."
H. G. Wells
|
|
|
|
|
Well, you're screwed; a primary key that is dependent on your user input, BBrrrttt, WRONG.
This is one of the most basic errors in database design, like using a phone number or email address or ANY other user data as a PK/FK. Keys should be stupid; they should have absolutely NO other function than to maintain your data structure.
Solution: create another field, BookID int with identity, on your Book table. Everywhere you use the ISBN, go there, add the new field, and populate it based on the existing structures (ISBN). Now change your FKs to the BookID fields, then remove the FKs on the ISBN. NOW you can edit your ISBN.
It is a lot of work, but you have to do it to correct the design error. Never put off fixing these errors; they get into the corners of your database and breed, and shortly you are totally f***ed.
Never underestimate the power of human stupidity
RAH
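The migration described above might look roughly like this for one child table (all table and constraint names are assumptions; repeat the child-table steps for BookCategory and the rest):

```sql
-- 1. Add a surrogate key to the parent table.
ALTER TABLE dbo.Book ADD BookID int IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.Book ADD CONSTRAINT UQ_Book_BookID UNIQUE (BookID);

-- 2. Add and populate the new column in the child table.
ALTER TABLE dbo.Book_By_Authors ADD BookID int NULL;

UPDATE ba
SET ba.BookID = b.BookID
FROM dbo.Book_By_Authors AS ba
JOIN dbo.Book AS b ON b.ISBN = ba.ISBN;

-- 3. Point the foreign key at BookID and drop the old one.
ALTER TABLE dbo.Book_By_Authors ALTER COLUMN BookID int NOT NULL;
ALTER TABLE dbo.Book_By_Authors
    DROP CONSTRAINT FK_BookByAuthors_Book_ISBN;
ALTER TABLE dbo.Book_By_Authors
    ADD CONSTRAINT FK_BookByAuthors_Book_BookID
    FOREIGN KEY (BookID) REFERENCES dbo.Book (BookID);

-- The ISBN is now just data and can be edited freely.
```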
|
|
|
|
|
Hello everyone!
I wrote a stored procedure and executed it (in SQL Server 2005).
This SP takes some data as parameters and, after some complicated transactions, deletes some rows from the tables, but it fails with the following error:
"Msg 2754, Level 16, State 1, Procedure sp_InsertHistoryTbl, Line 30
Error severity levels greater than 18 can only be specified by members of the sysadmin role, using the WITH LOG option."
1) How can I get the error message as an admin?
2) Does anyone know about this failure?
3) Is there any list of error numbers (for example, a table with error numbers, error messages, and perhaps their causes, etc.)?
Please let me know anything about these.
Thanks!
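On question 3: SQL Server keeps its full catalogue of error numbers, severities, and message texts in a system catalog view, which you can query directly (in SQL Server 2005 and later):

```sql
-- Every system error number, severity and message text.
SELECT message_id, severity, text
FROM sys.messages
WHERE language_id = 1033        -- English
ORDER BY message_id;

-- Look up the one from the question:
SELECT severity, text
FROM sys.messages
WHERE message_id = 2754 AND language_id = 1033;
```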
|
|
|
|
|
So did you do some research, or just come straight to CP? I found these responses[^], and it looks like you have a locking issue; some research may provide you with a solution!
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Thanks for your attention.
Yes, you're right; maybe it's better to Google it first!
|
|
|
|
|
Did you find a solution? Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Yes, I did.
I had written a RAISERROR with severity level 27!!!!
Shame on me!!!!
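For anyone hitting the same wall: severities 0-18 can be raised by any user, 19-25 require sysadmin membership plus the WITH LOG option, and anything above that is rejected. A sketch of the usual pattern:

```sql
-- Severity 16 is the conventional level for user-defined errors;
-- it raises the error without needing sysadmin or WITH LOG.
RAISERROR ('Delete failed for history row %d.', 16, 1, @RowId);

-- Severities 19-25 are reserved for serious problems: they demand
-- sysadmin membership AND WITH LOG, and 20-25 terminate the
-- connection outright, so they are rarely what you want:
-- RAISERROR ('Catastrophe!', 19, 1) WITH LOG;
```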
|
|
|
|
|
Hello,
I need to write a script that creates a table variable with a column for every instance of a row in a table. The data in the table it pulls from may change from time to time, so I'm looking for a dynamic approach.
Example table:
ColumnA: ColumnB:
1 Hello
2 World
Then my table variable declaration should look like:
DECLARE @TableVariable TABLE
(
Hello varchar(20),
World varchar(20)
)
Can anyone think of a performance-friendly solution to this?
Thanks,
Kevin
|
|
|
|
|
I think you should dynamically create a SQL string that creates the table.
On the other hand, why would you want to create a column for each row? Keep in mind that SQL Server has a limit of 1,024 columns per table.
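A sketch of that approach, building the column list from the rows of the example table (table name assumed; note the generated table has to be used inside the same dynamic batch):

```sql
-- Build "[Hello] varchar(20), [World] varchar(20)" from ColumnB.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = COALESCE(@cols + ', ', '')
             + QUOTENAME(ColumnB) + ' varchar(20)'
FROM dbo.ExampleTable;

SET @sql = N'CREATE TABLE #Pivoted (' + @cols + N');'
         + N' /* ...populate and use #Pivoted here, in the same batch... */';

EXEC (@sql);
```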
|
|
|
|
|
Kevin Leeds wrote: Can anyone think of a performance-friendly solution to this?
Yes... I think.
I encountered a similar issue this week and solved it by using a pivot table. There is an excellent article on here that gives a workable example.
Pivot two or more columns in SQL Server 2005[^]
Hope this helps...
Tim
|
|
|
|
|
Glad it was useful. Always a buzz when an article is used as a reference (by someone other than the author ).
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
A pivot may do the job for you; organising it will be a challenge.
Dynamic SQL is more straightforward, EXCEPT that if you use EXEC (@SQL) to create the table variable, it is created in a different procedure scope and is therefore not available to you. Creating a global ##Table will fix this, but it is not recommended in a frequently called proc.
Never underestimate the power of human stupidity
RAH
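The scope problem, and the ##Table workaround, in a short sketch:

```sql
-- Objects created inside EXEC live in a child scope and vanish
-- when that scope ends:
EXEC (N'CREATE TABLE #Scoped (i int);');
-- SELECT * FROM #Scoped;   -- fails: Invalid object name '#Scoped'

-- A global temp table survives the child scope...
EXEC (N'CREATE TABLE ##Shared (i int);');
SELECT * FROM ##Shared;     -- works
DROP TABLE ##Shared;

-- ...but there is only one ##Shared per server, so concurrent
-- callers of a frequently run proc will collide on the name.
```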
|
|
|
|
|
Thanks for the responses. I wish I could take the time to test these, but we have decided to move on - not making this a dynamic proc.
|
|
|
|
|
SELECT PD_Users.FirstName as FirstName, PD_Users.LastName as LastName, PD_Users.UserName as UserName, PD_Users.CityID as CityID,
PD_Cities.CityName as CityName, aspnet_Membership.CreateDate as CreateDate,
(SELECT COUNT(*) FROM PD_Articles WHERE (PD_Users.UserName = UserName)) AS ArticleCount,
(SELECT COUNT(*) FROM PD_News WHERE (PD_Users.UserName = SenderName AND IsDevNews = 1)) AS NewsCount,
(SELECT COUNT(*) FROM PD_News WHERE (PD_Users.UserName = SenderName AND IsDevNews = 0)) AS ITNewsCount,
(SELECT COUNT(*) FROM PD_ForumMessage WHERE (PD_Users.UserName = UserName)) AS MessageCount,
CASE WHEN aspnet_Users.LastActivityDate > @DateActive THEN cast(1 as bit) ELSE cast(0 as bit) end AS IsOnline,
ROW_NUMBER() OVER (ORDER BY aspnet_Membership.CreateDate ) AS RowNumber
FROM aspnet_Users INNER JOIN aspnet_Membership ON aspnet_Users.UserId = aspnet_Membership.UserId INNER JOIN
PD_Users INNER JOIN PD_Cities ON PD_Users.CityID = PD_Cities.CityId ON aspnet_Users.UserName = PD_Users.UserName
WHERE (@LastName = PD_Users.LastName OR (@LastName = '' AND 1=1))
AND (@UserName = PD_Users.UserName OR (@UserName = '' AND 1=1))
AND (@CityID = PD_Users.CityID OR (@CityID = -1 AND 1=1))
AND (@UserStatus = (CASE WHEN aspnet_Users.LastActivityDate > @DateActive THEN cast(1 as bit) ELSE cast(0 as bit) end) OR (@UserStatus = 0 AND 1=1))
ORDER BY PD_Users.FirstName DESC
I got the error: "The ORDER BY clause is invalid in views, inline functions..."
Do you have any idea?
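Not knowing whether this lives in a view or an inline function, the usual fix is the same: ROW_NUMBER() OVER (ORDER BY ...) already encodes the ordering, so drop the final ORDER BY from the view body and sort only in the outer query. A sketch, where vw_UserSearch is a hypothetical view wrapping the query above minus its last line:

```sql
-- Outer query: filtering and sorting on the precomputed RowNumber.
SELECT FirstName, LastName, UserName, CreateDate, RowNumber
FROM dbo.vw_UserSearch
WHERE RowNumber BETWEEN @FirstRow AND @LastRow
ORDER BY RowNumber;   -- ORDER BY is legal here, in the outer SELECT
```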
|
|
|
|
|
Have you tried typing the error message into Google? I did, and I got 94,000 results; maybe one of those could be of help?
|
|
|
|