|
Well, users will be working on confidential data. I have been thinking about more possible threats, and it occurred to me that a user might give his/her login details to someone who is not supposed to be part of the work, and that person could connect from another system just to influence the results in their favour. jschell, do you think this cannot happen?
|
|
|
|
|
Dan_K wrote: Well, users will be working on confidential data.
Define what type of "users". A help desk person? Operations? A consumer?
Dan_K wrote: I have been thinking of more possible threats and thought that a user may give
his/her login details to someone not supposed to be part of the work who may
connect from another system just to influence the results in their favour.
I don't know what the last part of that means, but as for the first part: exactly what are you doing to protect against the following scenario?
User A gives their credentials to X.
User A works on something then logs off.
User X logs on then does something 'wrong' then logs off.
|
|
|
|
|
Your client can place lock records in your database to indicate that a user has logged in (like semaphore locks). When the client closes, the record must be removed.
In our system we have a special server application that checks every minute whether each lock is still valid, and removes it otherwise. So if a client crashes, you won't be locked out for long.
To be able to see who has accessed the data, you could make an audit system:
Log which user signed in from which computer, what they did, and so on.
This can be done by various methods:
- the client app writing extra records
- triggers in the database
- the newer editions of SQL Server have system tables that store information about the performance of executed queries; maybe those are helpful as well.
- ...
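The lock-record idea above might look something like this (a rough sketch in Python with SQLite, purely for illustration; the table and column names are invented, and a real watchdog would refresh the heartbeat from the client):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE login_locks (user_id TEXT PRIMARY KEY, heartbeat REAL)")

def acquire_lock(user_id):
    """Try to register a login; fails if the user already holds a lock."""
    try:
        conn.execute("INSERT INTO login_locks VALUES (?, ?)",
                     (user_id, time.time()))
        return True
    except sqlite3.IntegrityError:  # primary key clash: already logged in
        return False

def release_lock(user_id):
    """Normal logout: the client removes its own lock record."""
    conn.execute("DELETE FROM login_locks WHERE user_id = ?", (user_id,))

def sweep_stale_locks(max_age_seconds=60):
    """What the watchdog server would run every minute: drop locks whose
    owner stopped refreshing the heartbeat (e.g. the client crashed)."""
    conn.execute("DELETE FROM login_locks WHERE heartbeat < ?",
                 (time.time() - max_age_seconds,))
```

The primary key on `user_id` is what makes the second login attempt fail atomically, so no separate "check then insert" race is possible.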
Regards ... OttO
|
|
|
|
|
Hi Dan, I don't know what programming language you are using, but here is the logic. I'm sure you can apply it in any language:
1. Create a shared folder with read/write authorization for Everyone on the server.
2. In my apps, every user has their own GUID primary key on the database.
3. Suppose one user has the primary key 0D5FF2C1-9710-47EF-86FE-164E6170EEFD. Each time this user logs in to the system, check whether a text file named 0D5FF2C1-9710-47EF-86FE-164E6170EEFD.TXT exists in the shared folder on the server. If it doesn't exist, go to step (4). If it exists, try to delete it. If the deletion succeeds, continue to step (4). If the deletion fails, another workstation is already using this login ID; you might want to show a message and redirect the program flow to the exit routine.
4. Create a text file named 0D5FF2C1-9710-47EF-86FE-164E6170EEFD.TXT on the server and keep it open while the app is running (don't close the file handle).
5. Upon exit, close the TXT file handle and delete the text file.
By holding the handle open, no other client can delete the file as long as the user is logged in. If the original client crashes somehow, the lock on the text file is released by the server OS, so another workstation will then be able to delete the file (and log in with that user ID).
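The steps above rely on Windows file-sharing semantics (an open handle blocks deletion). A rough POSIX equivalent of the same single-login idea, sketched in Python with an advisory `fcntl.flock` lock instead of the delete test (the lock directory and function names here are my own, and this is POSIX-only):

```python
import fcntl
import os

LOCK_DIR = "/tmp"  # stand-in for the shared folder; adjust for your setup

def try_login(user_guid):
    """Return an open file handle if this user is not logged in elsewhere,
    or None if another workstation already holds the lock.
    The OS releases the lock automatically if the process dies."""
    path = os.path.join(LOCK_DIR, user_guid + ".TXT")
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fh  # keep this handle open for the app's lifetime
    except BlockingIOError:
        fh.close()
        return None

def logout(fh):
    fh.close()  # closing the handle releases the lock
    os.unlink(fh.name)
```

As with the TXT-file scheme, the crash-recovery property comes for free: the kernel drops the lock when the holding process exits.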
Others might have better solutions. Yes, this is not a SQL solution, and it might not be elegant enough for most programmers/system developers. But it works.
hth,
foxyland
modified 8-Mar-12 4:13am.
|
|
|
|
|
Hello
I did a stock control app in vb.net that uses an Access 2007 db.
I connect using ACE.OLEDB.12.0
The Access db is located on the server, and the app is loaded on the workstations.
They make a backup of the Access file every night. (Simply copy and paste to another location)
There are about 10 users, reading and writing the same tables
They have been running it for a week so far, and have already had one error: when trying to access data I got an exception, "Unable to access the database".
From comments on the net, Access is a really bad idea (corruption, slow connections in multi-user apps, etc.)
They are a small company, so a licensed SQL2008 server is not practical financially.
MySQL seems like a logical free option?
My Questions:
1) As a temporary fix while migrating to a 'proper' db, I have read about creating a second Access db and linking all the tables in the 'back-end' db to the 'front-end' db, so that if the first db gets corrupted, the 'back-end' db will remain intact. Does this seem like a good idea?
2) Is MySQL the best choice for a db in this situation?
3) How easy is it to install and configure a MySQL database?
4) Are there connection objects in .Net for MySQL (I saw an ODBC connector on the MySQL download page - is this the best way to connect?)
5) How do you back up a MySQL Db - do you need special agents?
6) Are there any other problems/differences I should be aware of when going the MySQL route? (I have noticed some differences in SQL syntax, parametrized queries, etc., and I also use AutoNumber fields in Access to get unique Sales/Purchase Order numbers.)
I would be appreciative of any comments!
Thanks in advance
Richard
modified 26-Feb-12 1:20am.
|
|
|
|
|
Have you given any thought to using SQL Server Express?
It can do everything Access does, only better; it integrates very well with .NET, gives you powerful upgrade options, and is free. (Until you upgrade, that is.)
|
|
|
|
|
1) This is valid when you have both the UI and the data in Access, your UI is in VB.net so this scenario does not apply.
2) No - MySQL has less support than SQL Server and Oracle (and there are other good options out there). I would recommend SQL Server Express (the relevant limitation is 4gb database size), as there is a MUCH wider resource base for support and training. There is a valid upgrade path, though you then need to pay. And there are many more human resources (potential employees) with SQL Server skills.
3) Very easy; the gotchas are minimal. Case sensitivity (there is none) and collation are the ones I hate; just accept the defaults.
4) Connection strings
5) No idea but there will be a MySQL support site dedicated to this type of question (I just don't know it)
6) SQL Server used to provide an upgrade wizard for Access to SQL Server; I'm not sure whether MySQL does the same.
As you may infer, I am not a fan of MySQL (though I have no experience with it) and would urge you to use SQL Server.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
2) No. I use MySQL only when I have to. For small configurations like yours I recommend SQL Server Express 2008R2; it works very well, much better than Access, and has excellent support in .NET
|
|
|
|
|
2) No. SQL Server Express.
4) An ADO.net connector (not ODBC) for MySQL is downloadable; I use it to read MySQL databases created by others when I really need to.
|
|
|
|
|
Just like the rest: use SQL Server Express 2008 R2 with the management tools.
It has a 10GB limit per database. Other than that, it sure beats the hell out of Access.
If you really want to go open source, then I recommend PostgreSQL over MySQL.
Still, I highly recommend SQL Server Express 2008 R2, for the simple fact that your app is in .NET.
All the best,
Dan
|
|
|
|
|
Long time since I used Access, so this may not apply. When backups were done we always did a compact of the database (it's a menu choice, IIRC). In a similar situation to the one you describe, we never had any real problems with Access in 5 years or so. The front end was VB6 and the back end was just tables and queries on the server. So there may be something in your setup you can tweak.
However, I'd support the suggestion of looking at SQL Server Express. I've had a little experience with MySQL (on Linux) and found no problems with it, so I would not rule it out completely. I think, however, that SSE is probably a better bet. There are others around, like SQLite (though I don't think that's appropriate for what you describe), so it may be worth investigating.
Regards
David R
---------------------------------------------------------------
"Every program eventually becomes rococo, and then rubble." - Alan Perlis
The only valid measurement of code quality: WTFs/minute.
|
|
|
|
|
|
craig trombly @ infobright wrote: How do you keep the database tuned for queries when you do not know what the query will be?
You don't tune for a query; indexes aren't linked to queries, but to fields in a table (or a view). Put indexes on the fields that are used for searching and filtering. You can even ask SQL Server which indexes are used most frequently and which aren't, so it could automagically fine-tune itself based on recent usage (the server keeps statistics, so it "knows").
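To illustrate the point that the index lives on the column rather than on any particular query (shown here with SQLite via Python rather than SQL Server, and an invented `orders` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
             " region TEXT, amount REAL)")
# The index is declared once, on the column used for filtering...
conn.execute("CREATE INDEX idx_orders_region ON orders(region)")

# ...and any ad-hoc query that filters on `region` can use it,
# regardless of what it selects or aggregates.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE region = ?",
    ("northeast",)).fetchall()
print(plan)  # the plan mentions idx_orders_region
```

SQL Server's equivalent of asking "which indexes get used" would be its usage-statistics views; the sketch above only shows the general principle.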
craig trombly @ infobright wrote: but when the CFO wants to look at common trends against the corporate database for spending with daily, weekly and monthly increments for sales engineers in the northeast regions......
I don't see a problem there.
craig trombly @ infobright wrote: Take a look into why an analytic database is becoming a vital concept into the business intelligence communities at http://www.infobright.org
Ah, so this is an ad?
If you want to introduce something new to the community, then please submit an article; the forum is more for live discussions, and lots of new posts will move this one to the back, and people will forget about it.
Bastard Programmer from Hell
|
|
|
|
|
Eddy, Thank you for letting me know. This is not an ad though. What I am trying to do is to get people talking about analytic databases and to encourage the use and understanding of them. I do appreciate the feedback from you.
|
|
|
|
|
craig trombly @ infobright wrote: What I am trying to do is to get people talking about analytic databases and to encourage the use and understanding of them
Then why did you post
craig trombly wrote: Take a look into why an analytic database is becoming a vital concept into the business intelligence communities at http://www.infobright.org
Not cool.
Chris Maunder
The Code Project | Co-founder
Microsoft C++ MVP
|
|
|
|
|
Hey Chris, the original post has been removed, but it seems you inadvertently posted the link again.
|
|
|
|
|
Reported: sounds like an ad and looks like an ad and if the page could speak it would probably quack.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
Wat
All the best,
Dan
|
|
|
|
|
There is a badly designed Caché database I have to grab data from via ODBC. Rather than updating a table, they decided to just append the new records. (Won't they be surprised, eventually. :p)
Anyway, is there a simple-ish way to get just the latest row for each set of records? Some sort of cool groupy join or something?
"ID" is an autoincrememnt int
Col2 is a varchar that identifies the set of records
Col3 is the data I'm ultimately interested in.
(There are other columns, of course, but probably irrelevant for this question...I hope)
Looks sorta like this:
ID Col2 Col3
1 A 23
2 A 23
3 A 23
4 A 24
5 B 17
6 B 18
7 B 19
8 C 01
9 C 01
What I want to end up with is
4 A 24
7 B 19
9 C 01
|
|
|
|
|
This view might work
create or replace view grouped_table as
select col2
,max(ID)
from table
group by col2
;
Then you query the table and join on the view
select t.ID
,t.col2
,t.col3
from table t
,grouped_table gt
where t.id = gt.id
This is just off the top of my head, though.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
|
|
|
|
|
Ah...
I probably should have said that I don't own the database and can't create anything on it, etc. Not even a temporary table or view.
|
|
|
|
|
In that case try this then
select t.ID
,t.col2
,t.col3
from table t
,( select col2
,max(ID) id
from table
group by col2
) gt
where t.id = gt.id
Again, off the top of my head.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
|
|
|
|
|
Looks like it's working. Thanks.
Won't know for sure, though, until it finishes...if it does.
Started it a little over three hours ago and it began delivering the goods about two hours after that.
Oh...did I forget to mention this $%^$@#! table has over 177 million rows in it? And I expect the result to be in the neighborhood of 3 million?
Hopefully, it will finish before I go home this afternoon. Otherwise, pfft! "Night Watchman" will come along at 3 a.m. and shut off my computer and I'll have to start all over again in the morning.
EDIT:
It finished. And, as soon as I clicked on the result to try to copy it, BOOM! 'So sorry. We've run out of virtual memory and have to close. Hope you weren't working on anything too terribly important'.
Or words to that effect.
|
|
|
|
|
Using this table:
create table tt (
id int,
Col2 varchar(1),
Col3 Int)
This seems to work for me ...
select * from TT where id in (
select max(id) from tt group by col2)
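A quick sanity check of that query against the sample data from the question (run here through SQLite via Python, just to verify the logic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tt (id INT, Col2 VARCHAR(1), Col3 INT)")
rows = [(1, "A", 23), (2, "A", 23), (3, "A", 23), (4, "A", 24),
        (5, "B", 17), (6, "B", 18), (7, "B", 19), (8, "C", 1), (9, "C", 1)]
conn.executemany("INSERT INTO tt VALUES (?, ?, ?)", rows)

# Latest (highest-id) row per Col2 group:
result = conn.execute(
    "SELECT * FROM tt WHERE id IN (SELECT MAX(id) FROM tt GROUP BY Col2)"
).fetchall()
print(result)  # rows 4, 7 and 9, as the question asked for
```

Note that the subquery only scans the table once to find the max id per group; whether that is fast enough on 177 million rows is another matter, and an index on (Col2, id) would help.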
|
|
|
|
|
If you are doing an inner join does it matter which order the tables are in?
Is
Table1 inner join Table2
any different from
Table2 inner join Table1
Could there be any difference?
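To see the question concretely: for an inner join the two orders are logically equivalent and return the same rows (the optimizer may pick a different physical plan, but not a different result set). A small check, assuming SQLite via Python and invented sample tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (id INT, name TEXT);
CREATE TABLE Table2 (id INT, qty INT);
INSERT INTO Table1 VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO Table2 VALUES (2, 10), (3, 20), (4, 30);
""")

# Same join, both orders:
q1 = conn.execute("SELECT t1.id, name, qty FROM Table1 t1 "
                  "INNER JOIN Table2 t2 ON t1.id = t2.id").fetchall()
q2 = conn.execute("SELECT t1.id, name, qty FROM Table2 t2 "
                  "INNER JOIN Table1 t1 ON t1.id = t2.id").fetchall()
print(sorted(q1) == sorted(q2))  # → True
```

Only the matching ids (2 and 3) survive in either order; with outer joins, by contrast, the order very much matters.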
Brent
|
|
|
|