So in this case you prefer to do it through SPs?
Technology News @ www.JassimRahma.com
|
Hi,
Is it possible to create a trigger ON SELECT in MySQL?
If not, any other way to do it?
Thanks,
Jassim
|
Jassim Rahma wrote: Is it possible to create a trigger ON SELECT in MySQL?
No, triggers fire for only three events: INSERT, UPDATE & DELETE. See CREATE TRIGGER Syntax[^] & its Syntax and Examples[^].
Jassim Rahma wrote: If not, any other way to do it?
What do you want to achieve with ON SELECT? Please explain clearly.
thatrajaCode converters | Education Needed
No thanks, I am all stocked up. - Luc Pattyn
When you're wrestling a gorilla, you don't stop when you're tired, you stop when the gorilla is - Henry Minute
|
Jassim Rahma wrote: Is it possible to create a trigger ON SELECT in MySQL? No. That's why you added the second question, ain't it?
Jassim Rahma wrote: If not, any other way to do it? No. You can fetch through a stored procedure (SP) and add that functionality yourself. Triggers aren't meant as an audit trail; that's hardly secure anyway.
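The "add that functionality yourself" idea can be sketched as a small data-access function that logs every read before returning it. This is only an illustration: the table names (`accounts`, `read_audit`) and the helper name are invented, and Python's stdlib sqlite3 stands in for MySQL here.

```python
import sqlite3

# A "trigger ON SELECT" emulated in the access layer: every read is
# recorded in an audit table before the row is returned.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE read_audit (table_name TEXT, row_id INTEGER, read_at TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice')")

def audited_get(conn, row_id):
    # Log the read first, then return the row -- the "trigger" lives in code.
    conn.execute(
        "INSERT INTO read_audit VALUES ('accounts', ?, datetime('now'))",
        (row_id,),
    )
    return conn.execute("SELECT * FROM accounts WHERE id = ?", (row_id,)).fetchone()

row = audited_get(conn, 1)
count = conn.execute("SELECT COUNT(*) FROM read_audit").fetchone()[0]
```

As noted above, this is only as secure as the access layer itself: anyone who can query the tables directly bypasses the audit.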
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Dear all,
I'm currently trying to design a DB system from scratch which I can use as the backbone for my Qt application. Since I will store large amounts of data [photo RAW data] in there, I figured that a classic relational DB alone might not be the best solution. What I'm now planning is a classic relational DB [MariaDB] as persistent storage, cached by a NoSQL database [MongoDB].
In the end the whole system shall be built up like this.
Here's a little additional explanation:
Persistent Storage Access
The Persistent Storage Access handles access to the persistent MariaDB database which holds all the application data. It supplies the data used to build the Cache, handles Update and Creation queries, and also serves Get queries if the cache can't supply the requested item.
Cache Access
The Cache Access handles the access to the MongoDB database which is used to cache the access to the Persistent Storage. It contains a copy of the most recently added, updated or requested items.
Data Access API
The Data Access API coordinates access between Cache and Persistent Storage. If a requested item can't be found in the cache, the DAA gets it from the Persistent Storage and also adds it to the Cache. If an item is added, the DAA saves it into the Persistent Storage and also adds it to the cache. All this coordination is needed to keep complexity out of other parts, e.g. the business classes.
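The read-through/write-through coordination described above can be sketched in a few lines. This is a Python sketch with plain dicts standing in for MariaDB and MongoDB; the class and method names are invented for illustration.

```python
# Minimal read-through / write-through coordinator, as described above.
class DataAccessAPI:
    def __init__(self):
        self.storage = {}   # persistent storage stand-in (MariaDB)
        self.cache = {}     # cache stand-in (MongoDB)

    def put(self, key, value):
        # Write-through: persist first, then cache.
        self.storage[key] = value
        self.cache[key] = value

    def get(self, key):
        # Read-through: on a miss, load from storage and populate the cache.
        if key in self.cache:
            return self.cache[key]
        value = self.storage[key]
        self.cache[key] = value
        return value

daa = DataAccessAPI()
daa.put("photo:1", b"raw-bytes")
daa.cache.clear()               # simulate a cold cache
v = daa.get("photo:1")          # miss -> fetched from storage, cached again
```

The business classes would only ever talk to `DataAccessAPI`, which is exactly the complexity-hiding goal stated above.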
I'm in doubt whether MongoDB will really give me any performance boost, so could anyone share their thoughts on what I'm planning here?
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
The size of the database does not have very much to do with the performance. (Also remember that they have memory caching built in.)
The question is how you're going to use it.
The big weakness with MySQL/MariaDB is joins and aggregations with lots of data and a lot of tables. And I doubt that's how you're going to use it if you're having an image database. Fetching single items or ranges from an indexed table is actually where MySQL/MariaDB excels. Keep in mind that Google actually uses a (highly modified) version of MySQL.
My key rule for performance is that it's the amount of data you need to handle at a given moment that matters.
So tell us how you're going to use it first.
|
I have started an article[^] on the program I want to write.
The idea behind it is that I can assign photos to albums - one photo can be part of any number of albums, and can have any number of tags assigned.
Furthermore, the photos shall be versioned, e.g. there is the base (original) photo and several versions to which Photoshop edits were applied.
Possible queries are (apart from the usual insert etc..):
- Get all child versions of a photo
- Get all photos for a given album
- Get all albums a photo appears in
- Get the parent versions of a photo
- Get tags assigned to a photo
- Get photos which have tags X & Y [...] assigned
- Get photos which have tags X & Z assigned, but not Y
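The tag queries in the last two bullets map onto a standard many-to-many schema. Here is a hedged sketch using Python's stdlib sqlite3; the table and column names are assumptions for illustration, not from the article, and the same SQL works in MariaDB.

```python
import sqlite3

# Many-to-many schema: photos <-> tags via a junction table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE photo (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tag (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE photo_tag (photo_id INTEGER, tag_id INTEGER);
""")
conn.executescript("""
    INSERT INTO photo VALUES (1, 'a.raw'), (2, 'b.raw');
    INSERT INTO tag VALUES (1, 'X'), (2, 'Y'), (3, 'Z');
    INSERT INTO photo_tag VALUES (1,1),(1,2),(2,1),(2,2),(2,3);
""")
# "Photos with tags X & Y, but not Z": demand both wanted tags via
# GROUP BY/HAVING, and exclude tag Z via NOT EXISTS.
rows = conn.execute("""
    SELECT p.name FROM photo p
    JOIN photo_tag pt ON pt.photo_id = p.id
    JOIN tag t ON t.id = pt.tag_id AND t.label IN ('X', 'Y')
    WHERE NOT EXISTS (
        SELECT 1 FROM photo_tag px
        JOIN tag tx ON tx.id = px.tag_id
        WHERE px.photo_id = p.id AND tx.label = 'Z')
    GROUP BY p.id
    HAVING COUNT(DISTINCT t.label) = 2
""").fetchall()
```

The other queries in the list (photos per album, albums per photo, parent/child versions) are simple joins or a self-referencing `parent_id` lookup over the same kind of schema.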
|
I think you don't need the cache server, in your case it only adds to the complexity without adding much functionality.
Had you been working with large queries that aggregate down to small, reused data volumes, it would be a completely different situation.
I also think you're a bit wrong about SQLite. It's a file database all right, but so are (almost) all databases. The problem is that it's a SINGLE-file database, so when it has grown beyond the harddisk(s) on the server you're in trouble. But its real problem is that it's bad with concurrency.
So assuming that you anticipate some growth I agree with your choice of MariaDB for this purpose.
modified 9-Apr-14 5:34am.
|
Jörgen Andersson wrote: So assuming that you anticipate some growth I agree with your choice of MariaDB for this purpose.
I definitely anticipate big growth, since the image raw data will be contained in the database - a single raw image file is easily 20-30 MB.
|
SQLite is file-based, but that doesn't make it slow. And no, it's not file-based like FoxPro, where each object is a file. It's a friggin' fast single-file database, ideal for local caching.
You should also take into account that server and local databases work quite differently.
..and no, you should not need to cache the DB server (unless you're caching the blobs locally). If it's that slow then you need to revisit your data model.
|
Eddy Vluggen wrote: SQLite is file-based; but that doesn't make it slow.
I know - what I think is that it will become slow if I load and save big photo raw data frequently.
|
Marco Bertschi wrote: What I think is that it becomes slow if I load and save big photo raw data frequently. Why?
Yes, modifying a blob might be slower than a file-system; especially if the blob is doubled in size. Still, I did not notice any delay when I hooked up the dokan-driver to Sql Server. That made my db with blobs accesible using explorer, and I created, opened and dropped the connection on every read. Reading a blob is done in 4k pieces. Copying a file to/fro the DB still happened at 40 MB/sec (the max for that driver). It was fast enough on my testmachine (with 1 Gb RAM) to simply unzip a large file IN the folder that was linked to a table.
Depending on your network, it may be very expensive to fetch a large blob (say 250 MB on a 10 Mbit network). In that case you mimic IIS - you fetch a picture once, and if it's fetched again you send an HTTP 304.
|
Eddy Vluggen wrote: Depending on your network, it may be very expensive to fetch a large blob (say 250 MB on a 10 Mbit network). In that case you mimic IIS - you fetch a picture once, and if it's fetched again you send an HTTP 304.
Shouldn't be a big problem, since I plan on using the database locally.
|
If you plan on using it locally you can just as well use SQLite, concurrency won't be a problem then.
And if you plan on moving it to a separate server later, you will have to take network problems into consideration now anyway.
|
Jörgen Andersson wrote: concurrency won't be a problem then.
It will be, since a user may run more than one instance of the software, and I'm going to use multiple threads to access the database.
|
How many threads are we talking about - or rather, how many writes are we talking about? The problem SQLite has with concurrency is that it locks the whole file rather than the row or block when you're doing DML on it.
But every operation takes only milliseconds, so timeouts aren't likely.
It's hard for me to believe that you could overload it with the use you specified earlier.
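The file-level lock mentioned above, and the busy timeout that usually papers over it, can be demonstrated with Python's stdlib sqlite3 module; the table is made up for the demo.

```python
import sqlite3, tempfile, os

# SQLite locks the whole file for writes: a second writer either waits
# (up to its busy timeout) or fails with "database is locked".
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path, timeout=0.1)  # wait at most 100 ms on a lock
a.execute("CREATE TABLE t (x)")
a.commit()

a.execute("BEGIN IMMEDIATE")            # take the write lock on the whole file
a.execute("INSERT INTO t VALUES (1)")
try:
    b.execute("BEGIN IMMEDIATE")        # second writer: blocked, then times out
    locked = False
except sqlite3.OperationalError:
    locked = True                       # "database is locked"
a.commit()                              # lock released; b can write now
b.execute("INSERT INTO t VALUES (2)")
b.commit()
count = a.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

As the post says, each write only holds the lock for milliseconds, so with a sane busy timeout the contention rarely turns into actual errors at moderate load.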
|
Jörgen Andersson wrote: It's hard for me to believe that you could overload it with the use you specified earlier.
No, that isn't the problem I see - but later there might be a Web service in front of the DB, and many async write operations may deadlock SQLite. As you said, MariaDB is the way to go.
|
Ah, there you go, MariaDB it is then.
|
Marco Bertschi wrote: But later there might be a Web service in front of the DB,
Just noting that you seem to be suggesting that you can write both a client application and a web service application and somehow everything will be the same.
Although that can be done, the complications involved are significant. Those complications seldom add anything to the functionality of either, and doing it successfully without prior experience is going to be a challenge as well.
|
Marco Bertschi wrote: It will be, since a user may run more than one instance of the software Doesn't "have" to be; there might as well be a Windows service that works as a DAL - and have your app interface with that.
Still, for multiple users one would recommend a *server* database, complete with a server; it'd be a bit overkill to add that functionality yourself. And yes, that will introduce network latency, but it still does not mean that you need a second server instance at the client.
Cache the static data locally; that'd be the pictures. Generate a hash to see what's been modified. Put the relational info on the server, and access that without caching. Lots of web apps have a web-service layer on top of such a DB and perform decently - so I'd assume that any app querying directly would be faster. You could then omit the client DB completely and cache the images on the file system.
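The hash check suggested above might look like this; a minimal Python sketch where the helper names are invented, using a SHA-256 content hash to decide whether the locally cached image is still fresh.

```python
import hashlib

# Keep a content hash per cached image; refetch only when the
# server-side hash differs (the same idea as an HTTP 304).
def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def needs_refetch(cached_hash: str, server_hash: str) -> bool:
    # Identical hashes mean the cached copy is fresh.
    return cached_hash != server_hash

original = b"raw image bytes"
edited = b"raw image bytes, retouched"
fresh = needs_refetch(content_hash(original), content_hash(original))
stale = needs_refetch(content_hash(original), content_hash(edited))
```

In practice the server would store the hash alongside the photo row, so the client only transfers a short hex string instead of a 20-30 MB blob to find out nothing changed.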
|
Marco Bertschi wrote: Since I will store big amounts of data
Where "big" means that the image size is 100 gigs or 100K? And are there 1 billion images or 1,000? What is the projected growth rate?
Marco Bertschi wrote: I'm in doubt whether the MongoDB will really give me any performance boosts
What makes you think that you need a performance boost? How many requests does your business model realistically require per second? What is the distribution of the type of requests within an average hour?
|
jschell wrote: Where "big" means that the image size is 100 gigs or 100k? And there are 1 billion images or 1000? What is the projected growth rate?
"Big" in relation to a single entry in a table - a single RAW image can easily add up to more than 10 MB (which is big in comparison to other data, e.g. plain text).
|
What is the expected average size? How many will there be? What is the estimated growth rate?
|
Greetings,
I'm looking to create a basic DB for my small media production business. Think content similar to the old Northwind DB. Were my partner and I both using Windows, I would just use Access, but I use a Mac, so that is not an option other than using Boot Camp, which I would rather not do.
I was looking at MySQL and PostgreSQL and I'm not sure which to select - pros/cons etc. I'm hoping to store the DB online on my web server and create an Objective-C (Cocoa) app or a web app (PHP?) to query, insert, update etc. (I know this is the DB forum, just giving context.) Let me know if clarification is needed.
Thank you!
|
There's always FileMaker as a fairly close substitute for Access, having more or less the same drawbacks - so nothing I personally would recommend.
If an embedded database is enough for you, you can check on this[^] page whether SQLite is good enough for you.
If you indeed need a "real" database you can read a concise but good comparison between MySQL and PostgreSQL here[^].