|
Umm, what is your expectation here? You have described a few tables and told us what you want to do, but you have not actually asked a question.
We can't and won't build your app for you; you need to do that yourself. When you have a specific problem with your code, then we may be able to help.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I was able to achieve that without the date/time function. I made it as simple as I could. But I am having a problem with deployment (publishing) of the program, so that it can run on any system other than the one I developed it on. When I tried to publish, it did publish, but if I try to access the forms that contain a link to the database, I keep getting the error "Microsoft ...... 12.0 or 4.0 (when I saved in 2002 - 2003 format), not installed on local machine". I want to send you the folder that contains the project, but I can't, as I can't see where to attach files here.
|
|
|
|
|
Which is the better way of indexing the foreign keys?
Create Table table3(
    t3_id int not null auto_increment,
    t1_id int not null,
    t2_id int not null,
    primary key (t3_id),
    index IX_index (t1_id, t2_id), -- this is my concern
    constraint FK_t1 foreign key (t1_id)
        references table1(t1_id),
    constraint FK_t2 foreign key (t2_id)
        references table2(t2_id));
or
Create Table table3(
    t3_id int not null auto_increment,
    t1_id int not null,
    t2_id int not null,
    primary key (t3_id),
    index IX_t1 (t1_id), -- this is my concern
    index IX_t2 (t2_id), -- and this
    constraint FK_t1 foreign key (t1_id)
        references table1(t1_id),
    constraint FK_t2 foreign key (t2_id)
        references table2(t2_id));
This is for InnoDB tables. I don't have a broad understanding of how the B-tree works, but as far as I know, the first table will store the index in a single area while the second will not (correct me if I am wrong). If that is the case, what are the pros and cons of putting it in a single index versus multiple indexes?
Thanks.
modified 10-Feb-17 6:41am.
|
|
|
|
|
They are different ways and which is 'best' depends on your needs.
The first creates a composite index, and puts two fields in that list. The second one creates two separate indexes for those fields.
Are you going to be locating a lot of records by searching for t1_id, and when found, looking in that set for t2_id? Or are you going to be locating a lot of records by searching for either field?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
For now it's just a concept table, for me to understand when to use the first or the second one.
Can you explain when I should use the composite index versus the other approach? Please, thanks.
Based on your reply, you're saying that if I frequently use t1_id and t2_id together to satisfy a WHERE clause, I should use the composite index; otherwise I should use a separate index for each field. Am I right?
How about joining the three tables?
Thanks.
|
|
|
|
|
Think of it like a phone book: the data has a composite key of (Surname, Forename).
If you're always going to know the surname of the records you're looking for, then the index works.
But if you ever want to search for a particular forename, without knowing the surname, then the index is no help. You have to resort to reading through the entire book to find the matching records.
If you wanted to do that regularly, you'd create a separate index sorted by forename. That would make it easier to find records with a particular forename. But it would also mean you'd have more work to do when you insert, update, or delete records.
Which option you choose will depend on your data, how you're going to be querying it, and how often you're going to be changing it. For example, if you're only going to be using t1 as a lookup, and never searching for records with a particular t1_id, then there's probably not much point having an index on it.
It basically comes down to a combination of instinct and performance measurement.
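The phone-book rule above can be seen directly in a query planner. This is a minimal sketch in Python using SQLite (not MySQL/InnoDB, but the B-tree leading-prefix rule is the same): a composite index on (t1_id, t2_id) serves lookups on the leading column alone or on both columns, but a lookup on t2_id alone cannot seek into it. Table and index names are borrowed from the DDL in the question.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table3 (t3_id INTEGER PRIMARY KEY, "
            "t1_id INT NOT NULL, t2_id INT NOT NULL)")
con.execute("CREATE INDEX IX_index ON table3 (t1_id, t2_id)")

def plan(where):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # the detail string says SEARCH (index seek) or SCAN (full read).
    rows = con.execute(
        "EXPLAIN QUERY PLAN SELECT t3_id FROM table3 WHERE " + where
    ).fetchall()
    return rows[0][3]

print(plan("t1_id = 1"))                # SEARCH using IX_index (leading column)
print(plan("t1_id = 1 AND t2_id = 2"))  # SEARCH using IX_index (both columns)
print(plan("t2_id = 2"))                # SCAN: the composite index can't seek here
```

With the second layout (separate IX_t1 and IX_t2), the t2_id-only query would become a SEARCH as well, at the cost of maintaining two indexes on every insert, update, and delete.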
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Thanks, appreciated.
|
|
|
|
|
This FoxPro program keeps putting spaces in front of the data in the invoice number column. Perhaps that was the norm back then, before I started programming.
E.g.
"70155" would be " 70155"
So I'm not sure if the column is fixed-width, or if every invoice number starts with a blank space,
e.g. " 155"
I wrote this, placing a space in front of the invoice number, but I'm afraid it may backfire on me when the invoice number grows larger:
Dim c5 As New System.Data.OleDb.OleDbCommand("UPDATE ARTRS01H.dbf SET FPRICE = 0.00 WHERE FINVNO = ' " & pFINVNO & "' AND FITEMNO = '" & pFITEMNO & "'", connection)
I thought of using a wildcard, but I've been searching for a couple of hours and can't get things like $ or % to work.
If it ain't broke don't fix it
|
|
|
|
|
This works at the moment; I'll run with it today:
Dim c3 As New System.Data.OleDb.OleDbCommand("UPDATE ARTRS01.dbf SET FPRICE = " & pFPRICE & ", FAMOUNT = " & pFPRICE & " WHERE FINVNO LIKE '%" & pFINVNO & "%' AND FITEMNO = '" & pFITEMNO & "'", connection)
If it ain't broke don't fix it
|
|
|
|
|
Pad left (with blanks) all compared fields to their "defined lengths" (i.e. what's defined in the dbf), and then compare those.
That's your safest route (IMO).
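A minimal sketch of that pad-left idea, in Python for illustration. The field width used here (5) is a hypothetical placeholder; in practice you would read the real width from the DBF's field definition.

```python
def pad_to_field(value, width):
    """Right-justify the value with blanks to the field's defined width,
    matching how a fixed-width character field stores it."""
    return str(value).rjust(width)

# "155" stored in a width-5 character field becomes "  155", so padding
# the search value the same way makes an exact (=) comparison possible,
# with no LIKE wildcard needed.
print(pad_to_field("155", 5))    # padded to the full field width
print(pad_to_field("70155", 5))  # already full width, unchanged
```

Because the padding is derived from the declared width, this keeps working when invoice numbers grow longer, unlike hard-coding a single leading space.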
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
I have a width of 74 for that column. I'm not sure what the units are.
I'll google it today.
If it ain't broke don't fix it
|
|
|
|
|
What do you mean by "Google" it?
The definition of the field is in the DBF; you can even determine that with code at run-time to make it generic.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
I think the width was 74, just not sure 74 of what unit.
If it ain't broke don't fix it
|
|
|
|
|
Your code is vulnerable to SQL Injection[^]. NEVER use string concatenation to build a SQL query. ALWAYS use a parameterized query.
Everything you wanted to know about SQL injection (but were afraid to ask) | Troy Hunt[^]
How can I explain SQL injection without technical jargon? | Information Security Stack Exchange[^]
Query Parameterization Cheat Sheet | OWASP[^]
Using c3 As New System.Data.OleDb.OleDbCommand("UPDATE ARTRS01.dbf SET FPRICE = @FPRICE, FAMOUNT = @FAMOUNT WHERE FINVNO LIKE '%' + @FINVNO + '%' AND FITEMNO = @FITEMNO", connection)
    c3.Parameters.AddWithValue("@FPRICE", pFPRICE)
    c3.Parameters.AddWithValue("@FAMOUNT", pFPRICE)
    c3.Parameters.AddWithValue("@FINVNO", pFINVNO)
    c3.Parameters.AddWithValue("@FITEMNO", pFITEMNO)
    ...
End Using
Also, be very wary of your LIKE clause. If your pFINVNO is, for example, "1" , your query will update every record where the FINVNO column contains 1 , including '10' , '21' , '123456' , etc.
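That LIKE pitfall is easy to demonstrate. This sketch uses Python with SQLite purely for illustration; the column name comes from the thread, and the sample invoice numbers are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inv (FINVNO TEXT)")
con.executemany("INSERT INTO inv VALUES (?)",
                [("1",), ("10",), ("21",), ("123456",), ("999",)])

# Searching for invoice "1" with '%...%' wildcards matches every row
# whose invoice number merely CONTAINS a '1', not just the exact one.
hits = [r[0] for r in con.execute(
    "SELECT FINVNO FROM inv WHERE FINVNO LIKE '%' || ? || '%'", ("1",))]
print(hits)  # far more rows than the single invoice intended
```

An UPDATE with the same WHERE clause would silently overwrite all of those rows, which is why padding to the field width and comparing with = is the safer route.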
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
It started out using parameters, and it's a Windows app.
But I wasn't able to match the parameters and started Easter-egg hunting to figure out why. Back to the case-sensitive stuff and that extra blank-char prefix.
So I wrote it the old-school way to diagnose it.
I'll try to go back to parameters on that, but will have to use ? placeholders for that to work.
It's supposed to be more FoxPro than SQL, like speaking pure FoxPro using VFPOLEDB.
If it ain't broke don't fix it
|
|
|
|
|
I'm going to create a rather large report that collects the data from two different sources.
One is a badly normalized database, where the data I need is spread over a large number of columns in many tables.
The other is an Entity–attribute–value model[^].
I don't have any problems fetching the data from either source, but the data is going to be organised into categories and subcategories, or possibly a tree, which currently don't exist in the EAV model and don't correspond at all with the data model of the other database, so they need to be created from scratch.
I'm looking for ideas on how to make this in a fairly future-proof way. I don't really have any keys to use either; it's just the column names in one source and the attribute names in the other.
I realize there is no right way of doing this (but many wrong).
So any opinions on how to do this, and as important, how not to do this, is appreciated.
And no, I can't do much about the structure of the sources, that's way out of scope.
|
|
|
|
|
The instant answer would be a mapping table/application. Probably not what you want to hear.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
That's what I'm building at the moment.
But you're quite right, it's not what I wanted to hear.
Too many error sources, and yet another place to add the same texts again.
I suspect this is one of those cases where garbage in becomes garbage out.
|
|
|
|
|
There are a number of ways to approach this.
Sounds like you're at the "exploratory" stage.
In that case, I might suggest MS Power BI for the Desktop.
Power BI Desktop | Microsoft Power BI
You can use Power BI to pull data from multiple sources, build relationships, and get a "feel for the data" and perhaps produce your report in question (with or without using any number of "cleansing" techniques).
Moving forward, look into (data warehouse) "star schemas" for modeling ideas.
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
Jörgen Andersson wrote: I'm looking for ideas on how to make this in a fairly futureproof way, I don't really have any keys to use either. It's just the column names in one source and the attribute names in the other. I realize there is no right way of doing this (but many wrong). So any opinions on how to do this
You've got two groups of unsorted and ungrouped column names coming from a repository that need to be mapped to a model?
..then I'd go for a relational model, even if it only exists in memory. That would mean following the steps to normalize the data (up to BCNF), which is a tedious but proven way. Any group of properties that makes up a model (regardless of the way the data is physically stored) would be a candidate.
I would seriously consider creating that theoretical model and then creating an in-memory database (using SQLite or similar) to hold it. The added advantage of that approach is having an abstraction of the data that you can already use to design reporting and such. Another added advantage is that any redundancies or inconsistencies in the database would be caught. The downside is that you need (access to) a domain expert to build and validate the model.
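A minimal sketch of that in-memory staging idea, in Python with SQLite. The table and column names here (category, attribute_map, and the sample values) are hypothetical placeholders, not anything from the actual sources; the point is that the model's own constraints flag redundant or inconsistent mappings as you load them.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE category (
        category_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL UNIQUE
    );
    CREATE TABLE attribute_map (
        source_name TEXT NOT NULL,  -- column name or EAV attribute name
        source      TEXT NOT NULL CHECK (source IN ('relational', 'eav')),
        category_id INTEGER NOT NULL REFERENCES category(category_id),
        UNIQUE (source_name, source)
    );
""")
con.execute("INSERT INTO category (name) VALUES ('Dimensions')")
con.execute("INSERT INTO attribute_map VALUES ('width_mm', 'relational', 1)")
con.execute("INSERT INTO attribute_map VALUES ('Width', 'eav', 1)")

# A redundant mapping is rejected by the model itself:
try:
    con.execute("INSERT INTO attribute_map VALUES ('Width', 'eav', 1)")
    caught = False
except sqlite3.IntegrityError:
    caught = True  # the UNIQUE constraint flagged the duplicate
print("duplicate mapping caught:", caught)
```

Once the mapping is validated this way, the report queries can join against attribute_map instead of hard-coding names from either source.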
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Yes, this would probably be the right way of doing it.
|
|
|
|
|
Jörgen Andersson wrote: Yes, this would probably be the right way of doing it. Not the most exciting thing to do, I agree
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Using LINQ to SQL, I have this update method. I'm getting a "Specified cast is not valid" exception. The line number of the exception is the query line in the inner try/catch.
public void UpdateDashboardInfo(AssayDashboardInfoEntity entity)
{
    try
    {
        using (var context = new AssayDashboardDataContext())
        {
            AssayDashboardInfo model = null;
            try
            {
                model = (from adi
                         in context.AssayDashboardInfos
                         where adi.ResultId.CompareTo(entity.ResultId) > 0
                         select adi).FirstOrDefault();
            }
            catch (Exception e1)
            {
                _logger.Error("****** ARD_DAL: UpdateDashboardInfo (Query)", e1);
            }
            if (model != null)
            {
                model.SiteId = entity.SiteId;
                model.InstrumentId = entity.InstrumentId;
                model.TowerLocation = entity.TowerLocation;
                context.SubmitChanges();
            }
        }
    }
    catch (Exception e)
    {
        _logger.Error("********** ARD_DAL: UpdateDashboardInfo", e);
    }
}
"adi.ResultId" is a DB Guid. "entity.ResultId" is a .Net Guid in my entity class.
Anyone see what's wrong with this?
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
It looks like you're trying to update the entity with a matching ID. But your query is updating the first entity whose ID is greater than the ID of the entity you've passed in.
Does it even make sense to ask if one GUID is greater than another?
I suspect your query should be:
model = (from adi
         in context.AssayDashboardInfos
         where adi.ResultId == entity.ResultId
         select adi).FirstOrDefault();
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Yeah, it should be string.Equals().
But I was getting the cast exception BEFORE I added the Compare. I'm guessing there's some problem comparing a SQL GUID against the .NET GUID.
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|