|
Eddy Vluggen wrote: SQLite is file-based; but that doesn't make it slow.
I know. What I think is that it becomes slow if I load and save big raw photo data frequently.
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
|
|
|
|
Marco Bertschi wrote: What I think is that it becomes slow if I load and save big photo raw data frequently. Why?
Databases (both local and server) are meant to handle data. That includes blobs.
Yes, modifying a blob might be slower than a file system, especially if the blob doubles in size. Still, I did not notice any delay when I hooked up the Dokan driver to SQL Server. That made my DB with blobs accessible using Explorer, and I created, opened and dropped the connection on every read. Reading a blob is done in 4k pieces. Copying a file to/from the DB still happened at 40 MB/sec (the max for that driver). It was fast enough on my test machine (with 1 GB RAM) to simply unzip a large file IN the folder that was linked to a table.
Depending on your network, it may be very expensive to fetch a large blob (say 250 MB on a 10 Mbit network). In that case you mimic IIS: you fetch a picture once, and if it's fetched again you send an HTTP 304.
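The 304 trick boils down to a conditional GET: the client echoes back a validator (an ETag) from its last fetch, and the server answers 304 with no body when nothing changed. A minimal sketch of the server-side check in Python; the function name and the hash-as-ETag scheme are illustrative assumptions, not part of IIS or any framework:

```python
import hashlib

def conditional_response(image_bytes, if_none_match):
    """Answer a conditional GET: 304 if the client's cached copy is current.

    `if_none_match` is the ETag the client sends back from its last fetch.
    Using a content hash as the ETag is just one possible validator.
    """
    etag = hashlib.sha256(image_bytes).hexdigest()
    if if_none_match == etag:
        return 304, etag, b""            # client cache still valid, no body sent
    return 200, etag, image_bytes        # full body plus the new ETag

# First request: no cached copy yet, so the server sends the picture and an ETag.
status, etag, body = conditional_response(b"raw-image-data", None)
# Repeat request with that ETag: the server answers 304 and sends no body.
status2, _, body2 = conditional_response(b"raw-image-data", etag)
```

The expensive transfer happens once; every later request for the unchanged blob costs only a round trip for headers.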
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: Depending on your network, it may be very expensive to fetch a large blob (say 250 MB on a 10 Mbit network). In that case you mimic IIS: you fetch a picture once, and if it's fetched again you send an HTTP 304.
Shouldn't be a big problem, since I plan on using the database locally.
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
|
|
|
|
If you plan on using it locally you can just as well use SQLite; concurrency won't be a problem then.
And if you plan on moving it to a separate server later, you will have to take network issues into consideration now anyway.
|
|
|
|
|
Jörgen Andersson wrote: concurrency won't be a problem then.
It will be, since a user may run more than one instance of the software, or I may use multiple threads to access the database.
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
|
|
|
|
How many threads are we talking about - or rather, how many writes? The problem SQLite has with concurrency is that it locks the whole file, rather than the row or block, when you run DML against it.
But every operation takes only milliseconds, so timeouts aren't likely.
It's hard for me to believe that you could overload it with the use you specified earlier.
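To put a number on that: with WAL journal mode and a busy timeout set, concurrent writers from several threads simply queue up on the file lock instead of failing. A small self-contained sketch using Python's built-in sqlite3 module; the table and row counts are made up for the demo:

```python
import os
import sqlite3
import tempfile
import threading

def writer(path, thread_id, rows):
    # Each thread gets its own connection. The timeout makes a writer
    # wait for the file lock instead of raising "database is locked".
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
    for i in range(rows):
        conn.execute("INSERT INTO photos(owner, seq) VALUES (?, ?)",
                     (thread_id, i))
        conn.commit()
    conn.close()

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE photos(owner INTEGER, seq INTEGER)")
conn.commit()
conn.close()

# Four threads writing concurrently against the same database file.
threads = [threading.Thread(target=writer, args=(path, t, 50)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

conn = sqlite3.connect(path)
total = conn.execute("SELECT COUNT(*) FROM photos").fetchone()[0]
print(total)  # 200: every write succeeded despite the whole-file write lock
```

Each individual write still serializes on the file, but since each one takes milliseconds, a handful of threads never comes close to the timeout.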
|
|
|
|
|
Jörgen Andersson wrote: It's hard for me to believe that you could overload it with the use you specified earlier.
No, that isn't the problem I see. But later there might be a Web service in front of the DB, and many async write operations may deadlock SQLite. As you said, MariaDB is the way to go.
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
|
|
|
|
Ah, there you go, MariaDB it is then.
|
|
|
|
|
Marco Bertschi wrote: But later there might be a Web service in front of the DB,
Just noting that you seem to be suggesting that you can write both a client application and a web service application and somehow everything will be the same.
Although that can be done, the complications involved are significant. Those complications seldom add anything to the functionality of either, and doing it successfully without prior experience is going to be a challenge as well.
|
|
|
|
|
Marco Bertschi wrote: It will be, since a user may run more than one instance of the software Doesn't "have" to be; there might as well be a Windows service that works as a DAL - and have your app interface with that.
Still, for multiple users one would recommend a *server* database, complete with a server. It'd be a bit of overkill to add that functionality yourself. And yes, that will introduce network latency, but it still does not mean that you need a second server instance at the client.
Cache the static data locally; that'd be the pictures. Generate a hash to see what's been modified. Put the relational info on the server, and access that without caching. Lots of web apps have a web-service layer on top of such a DB and perform decently - so I'd assume that any app querying directly would be faster. You could then omit the client DB completely and cache the images on the file system.
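The hash check described above can be sketched in a few lines: the server stores a content hash next to each image row, and the client re-downloads only when its cached file's hash no longer matches. The function names and the SHA-256 choice below are illustrative assumptions, not a prescribed API:

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """SHA-256 of a file, read in chunks so large RAW images don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def fetch_if_modified(cache_path, server_hash, fetch):
    """Download only when the cached copy is missing or its hash differs
    from the hash the server stores alongside the image row."""
    if os.path.exists(cache_path) and file_hash(cache_path) == server_hash:
        return False                      # cache hit, nothing transferred
    data = fetch()
    with open(cache_path, "wb") as f:
        f.write(data)
    return True

# Simulated round trip: the "server" holds the image bytes and their hash.
image = b"pretend this is a 10 MB RAW file"
server_hash = hashlib.sha256(image).hexdigest()
cache = os.path.join(tempfile.mkdtemp(), "img.raw")

first = fetch_if_modified(cache, server_hash, lambda: image)   # True: fetched
second = fetch_if_modified(cache, server_hash, lambda: image)  # False: cached
```

Only the small relational queries then cross the network on every access; the heavy image bytes move once per modification.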
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Marco Bertschi wrote: Since I will store big amounts of data
Where "big" means that the image size is 100 gigs or 100k? And there are 1 billion images or 1000? What is the projected growth rate?
Marco Bertschi wrote: I'm in doubt whether MongoDB will really give me any performance boosts
What makes you think that you need a performance boost? How many requests does your business model realistically require per second? What is the distribution of the type of requests within an average hour?
|
|
|
|
|
jschell wrote: Where "big" means that the image size is 100 gigs or 100k? And there are 1 billion images or 1000? What is the projected growth rate?
"Big" in relation to a single entry in a table - a single RAW image can easily exceed 10 MB (which is big in comparison to other data, e.g. plain text).
I will never again mention that Dalek Dave was the poster of the One Millionth Lounge Post, nor that it was complete drivel.
How to ask a question
|
|
|
|
|
What is the expected average size? How many will there be? What is the estimated growth rate?
|
|
|
|
|
Greetings,
I'm looking to create a basic DB for my small media production business; think content similar to the old Northwind DB. Were my partner and I both using Windows, I would just use Access, but I use a Mac, so that is not an option, other than using Boot Camp, which I would rather not do.
I was looking at MySQL and PostgreSQL and am not sure which to select - pros/cons, etc. I'm hoping to store the DB online on my web server and create an Objective-C (Cocoa) app or web app (PHP?) to query, insert, update, etc. (I know this is the DB forum; just giving context.) Let me know if clarification is needed.
Thank you!
|
|
|
|
|
There's always FileMaker as a fairly close substitute for Access, having more or less the same drawbacks - so nothing I personally would recommend.
If an embedded database is enough for you, you can check on this[^] page whether SQLite is good enough for you.
If you indeed need a "real" database, you can read a concise but good comparison between MySQL and PostgreSQL here[^].
|
|
|
|
|
Sounds like SQLite will work perfectly, plus easy integration with PHP should make my life easy. Thanks a lot!
|
|
|
|
|
Hello, I would like to ask if you can help me with this; thank you in advance.
We have two datasets. 1. The first dataset contains population data of cities in the United States of America. Each record in the dataset has the following format:
{
"city": "ACMAR",
"loc": [
-86.51557,
33.584132
],
"pop": 6055,
"state": "AL",
"_id": "35004"
}
2. The second dataset contains log data from a geo-location service. Each record in the dataset has the following format:
{
"_id": {
"$oid": "5191f53b1b76a5666a8cbd64"
},
"status": 200,
"requrest_time": 504,
"type": "stationboard",
"datetime": {
"$date": "2014-02-19T07:03:32.000Z"
}
}
1. How can we get all the states that have at least 2 cities with population > 1,000,000 (1M)? Remember that a city can be split into multiple records, so we should aggregate its population first. (zips dataset)
2. We need to find the day (date) with the largest number of HTTP 500 errors (status=500).
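In MongoDB itself both queries would be aggregation pipelines (for the first: `$group` on state+city summing `pop`, `$match` on pop > 1M, `$group` on state counting cities, `$match` on count >= 2; for the second: `$match` on status 500, `$group` per day). With no server at hand here, a plain-Python sketch of the same logic over made-up records - the data values below are invented, not taken from the real datasets:

```python
from collections import Counter, defaultdict

# Toy records in the shape of the zips dataset (values are invented).
zips = [
    {"city": "A", "state": "TX", "pop": 600000},
    {"city": "A", "state": "TX", "pop": 500000},   # same city, second ZIP record
    {"city": "B", "state": "TX", "pop": 1200000},
    {"city": "C", "state": "CA", "pop": 2000000},
]

# Step 1: sum population per (state, city) - a city spans several ZIP records.
city_pop = defaultdict(int)
for r in zips:
    city_pop[(r["state"], r["city"])] += r["pop"]

# Step 2: count, per state, the cities whose aggregated population > 1M.
big = defaultdict(int)
for (state, city), pop in city_pop.items():
    if pop > 1_000_000:
        big[state] += 1

# Step 3: keep states with at least two such cities.
states = [s for s, n in big.items() if n >= 2]
print(states)  # ['TX']

# Second query: day with the most HTTP 500 errors (toy log records).
logs = [
    {"status": 500, "datetime": {"$date": "2014-02-19T07:03:32.000Z"}},
    {"status": 500, "datetime": {"$date": "2014-02-19T09:00:00.000Z"}},
    {"status": 200, "datetime": {"$date": "2014-02-20T01:00:00.000Z"}},
    {"status": 500, "datetime": {"$date": "2014-02-20T02:00:00.000Z"}},
]
days = Counter(r["datetime"]["$date"][:10] for r in logs if r["status"] == 500)
print(days.most_common(1)[0][0])  # '2014-02-19'
```

Translating each step back into a `$group`/`$match` stage gives the pipeline MongoDB's `aggregate` command expects.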
|
|
|
|
|
See: MongoDB[^]
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Thank you for your advice, but I need a bit of help with the code. Can you help, please?
|
|
|
|
|
You can start by asking the question only once.
=========================================================
I'm an optoholic - my glass is always half full of vodka.
=========================================================
|
|
|
|
|
Hello, I would like to ask if you can help me with this; thank you in advance.
We have two datasets. 1. The first dataset contains population data of cities in the United States of America. Each record in the dataset has the following format:
{
"city": "ACMAR",
"loc": [
-86.51557,
33.584132
],
"pop": 6055,
"state": "AL",
"_id": "35004"
}
2. The second dataset contains log data from a geo-location service. Each record in the dataset has the following format:
{
"_id": {
"$oid": "5191f53b1b76a5666a8cbd64"
},
"status": 200,
"requrest_time": 504,
"type": "stationboard",
"datetime": {
"$date": "2014-02-19T07:03:32.000Z"
}
}
1. How can we get all the states that have at least 2 cities with population > 1,000,000 (1M)? Remember that a city can be split into multiple records, so we should aggregate its population first. (zips dataset)
2. We need to find the day (date) with the largest number of HTTP 500 errors (status=500).
|
|
|
|
|
Ah, MongoDB is one of those new-fangled document databases and does not use SQL; I can only suggest you get into the documentation. Most of us use traditional relational databases and can help with SQL queries.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: I can only suggest you get into the documentation
Or the school books.
|
|
|
|
|
A quick Google shows me this one as the top reference:
SQL Comparison for MongoDB[^]
Every day, thousands of innocent plants are killed by vegetarians.
Help end the violence EAT BACON
|
|
|
|
|
Is there any utility or mode to convert a SQL Server database to a lower version?
For example, from SQL Server 2008 R2 to SQL Server 2005?
Thank you !
|
|
|
|