|
Once, in my previous company, in a system that someone else maintained, all primary keys and relationships were removed because "they causes problems when they need to patch data (due to bugs such as multiple rows were inserted) by using scripts"......
|
|
|
|
|
darkelv wrote: "they causes problems when they need to patch data (due to bugs such as multiple rows were inserted) by using scripts"......
That management should actually adopt the policy of abolishing all SQL Server licenses and prohibiting RDBMS in their realm. They could simply live with plain vanilla text files, which would save them a good deal of money in software license costs, recurring DBA charges, and more.
Vasudevan Deepak Kumar
Personal Homepage Tech Gossips
A pessimist sees only the dark side of the clouds, and mopes; a philosopher sees both sides, and shrugs; an optimist doesn't see the clouds at all - he's walking on them. --Leonard Louis Levinson
|
|
|
|
|
Vasudevan Deepak K wrote: That management should actually adopt the policy of abolishing all SQL Server licenses and prohibiting RDBMS in their realm.
Maybe that management consists of a bunch of drunken lemurs
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
The scary part is that they're a Microsoft Certified Gold Partner.
|
|
|
|
|
Well, that explains everything then! It is by design!!
|
|
|
|
|
At Microsoft, it's never a bug. It's just a Microsoft Certified "Gold" Feature...:P
|
|
|
|
|
That made my day!
|
|
|
|
|
Enforcing referential integrity takes clock cycles, and this is where you end up in a battle with DBAs. A DBA will typically point out that it is up to your application to ensure integrity, but you can argue back that the database gives you the tools to do it - so why not let the database do what it was designed for? In some cases the DBA has a point, because they have a legacy database where the referential integrity checking is a real kludge (i.e. slow). In more modern databases, though, referential integrity checks are much faster (generally performed via a quick index scan).
Now the issue becomes how to react to a referential integrity problem, and this becomes an architectural issue. If you leave it to the database to inform you, then you've gone through the whole process of submitting the data and waiting for the database to verify (or not) that the operation has succeeded; if it fails, you have to notify the user or do some remedial work. If your application checks the integrity, then theoretically this becomes less of an issue. There is a problem with this line of thinking, though - you could only guarantee it if the database were single-user. In the time between you performing the check and actually attempting the insert (or update), the record could have been deleted, at which point you've broken the integrity rules. Another issue boils down to this: if you leave it to your code to check the integrity, then EVERY update/insert/delete statement must check it (and in the case of deletes this can span multiple tables - which means your selects must be redone every time a new table is added into the referential mix).
Bottom line - the DB provides the tools to do this. It's efficient, and means you don't have to worry about forgetting to perform a referential check.
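As a minimal sketch of that bottom line, here is the database doing the work for you. This assumes a hypothetical customers/orders schema and uses SQLite (via Python's sqlite3 module) purely for illustration; once the foreign key is declared, the engine refuses an orphaned row with no application-side check at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is set per connection
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id))""")

conn.execute("INSERT INTO customers (id) VALUES (1)")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # valid parent, accepted

try:
    # No customer 999 exists - the engine rejects the orphan row itself.
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Note that this also sidesteps the race condition above: the check and the insert happen atomically inside the engine, so no concurrent delete can slip in between them.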
|
|
|
|
|
Pete O'Hanlon wrote: Bottom line - the DB provides the tools to do this. It's efficient, and means you don't have to worry about forgetting to perform a referential check.
There has to be a harmonious combination of the application and the database to minimize the various heartburns.
Vasudevan Deepak Kumar
Personal Homepage Tech Gossips
A pessimist sees only the dark side of the clouds, and mopes; a philosopher sees both sides, and shrugs; an optimist doesn't see the clouds at all - he's walking on them. --Leonard Louis Levinson
|
|
|
|
|
They should both do the checks. Your application should send the type of information the database wants, and the database should expect a specific type of data.
The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
|
|
|
|
|
Expert Coming wrote: They should both do the checks.
I disagree.
Expert Coming wrote: the database should expect a specific type of data.
It should expect nothing. Like any code, the caller should never be trusted (unless of course you are the guaranteed only caller).
|
|
|
|
|
Expect isn't the right word, but I do think that the database needs to know what it is storing, and the application needs to know what kind of data the database wants.
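A minimal sketch of that division of labour, assuming a hypothetical `people` table with an `age` column: the application validates before sending (fast, friendly errors), while the database declares its own CHECK constraint so it never has to trust the caller. SQLite via Python's sqlite3 stands in for the real engine here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Database-side rule: the engine itself refuses out-of-range ages.
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, age INTEGER CHECK (age BETWEEN 0 AND 150))")

def insert_person(age):
    # Application-side check: fail fast, before a round trip to the database.
    if not isinstance(age, int) or not 0 <= age <= 150:
        raise ValueError(f"invalid age: {age!r}")
    conn.execute("INSERT INTO people (age) VALUES (?)", (age,))

insert_person(42)  # passes both layers

try:
    # Bypass the application check to show the database enforcing its own rule anyway.
    conn.execute("INSERT INTO people (age) VALUES (-1)")
except sqlite3.IntegrityError as e:
    print("database rejected it anyway:", e)
```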
The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
|
|
|
|
|
Pete O'Hanlon wrote: it is up to your application to ensure integrity
Yeah, on a previous job (using RDB on OpenVMS) we had referential integrity on the dev systems only; the code was expected to be correct and fully-tested before it was deployed to production, so the database needn't check.
They also said that metadata slows down the database, so I wasn't allowed to create functions in the database.
Now that I get to use SQL Server, I do set up referential integrity... but turning on cascaded deletes still feels like cheating.
|
|
|
|
|
PIEBALDconsult wrote: but turning on cascaded deletes still feels like cheating
It feels dirty - so dirty. And it's one of the reasons we don't do deletes - we use statuses to control whether a record is visible or not (and that way we don't worry about accidentally deleting something important).
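The status-flag approach described above can be sketched as follows, assuming a hypothetical `records` table with an `is_active` column (again using SQLite via Python's sqlite3 as a stand-in): a "delete" is just an update, normal reads filter on the flag, and nothing is ever physically removed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT, is_active INTEGER DEFAULT 1)")
conn.executemany("INSERT INTO records (payload) VALUES (?)", [("a",), ("b",), ("c",)])

# "Delete" by flipping the status instead of removing the row.
conn.execute("UPDATE records SET is_active = 0 WHERE id = 2")

# Normal reads filter on the flag; the hidden row is still there if you need it back.
visible = conn.execute("SELECT payload FROM records WHERE is_active = 1 ORDER BY id").fetchall()
print(visible)  # [('a',), ('c',)]

total = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(total)  # 3 - nothing was physically deleted
```

A side benefit: with no physical deletes, cascaded deletes never come into play, and an accidental "delete" is undone by flipping the flag back.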
|
|
|
|
|
Pete O'Hanlon wrote: don't do deletes
I agree with that.
|
|
|
|
|
In my line of work, speed is less of an issue than robustness and making the solution as error-proof as possible. So I think the database and the application that uses it must both be able to gracefully handle whatever crap is thrown at them (i.e. checks on both sides).
That works for me and is my opinion based on my experience so far. Of course, I'm always open to well-argued ideas.
modified 19-Nov-18 21:01pm.
|
|
|
|
|
Pete O'Hanlon wrote: DBA
Dumb bloody A$$@#$!@'s
|
|
|
|
|
More along the lines of "does b*gger all".
|
|
|
|
|
True also
|
|
|
|
|
Philip Laureano wrote: Is it a horror, or not? And if it isn't a horror, why would you say it isn't?
Are you the person who has to clean up the data once it is a mess? The answer to that question is the same as the answer to yours above...
|
|
|
|
|
Philip Laureano wrote: Is it a horror, or not?
Yes, it is.
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
Philip Laureano wrote: and not even the upper management knew about it.
As if they know what is under the code.
I've yet to come across upper management who know what goes on in the code. Of course there are some, but those are exceptions.
try
{
}
catch (UpperManagmentException ex)
{
}
/* I can C */
// or !C
Yusuf
|
|
|
|
|
Compile Time Error #OMGWTF error UpperManagmentException can never be caught.
Otherwise [Microsoft is] toast in the long term no matter how much money they've got. They would be already if the Linux community didn't have its head so firmly up its own command line buffer that it looks like taking 15 years to find the desktop.
-- Matthew Faithfull
|
|
|
|
|
dan neely wrote: can never be caught
Even when they do get caught they get big bonuses. (boni?)
|
|
|
|
|
I believe it is. You're leaving data integrity up to the USERS! Are you kidding me!?
Data integrity is everything. You can always index and perform other optimizations if you want to speed things up, even throw hardware at it if necessary, but fixing corrupted data is a nightmare with no easy solution. Given the horsepower of modern systems, there's no excuse for not using this important feature.
|
|
|
|