|
It is free for commercial use;
SQL Server Compact 4.0 is freely redistributable under a redistribution license agreement, and application developers redistributing SQL Server Compact 4.0 can optionally register at the SQL Server Compact redistribution site. Registering helps developers get information about SQL Server Compact critical security patches and hotfixes, which can then be applied to client installations.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
|
|
|
|
|
Display the last name of all employees who have an 'i' and an 'n' in their last name.
Is this still done with SQL wildcards combined with an AND? Sigh.. please help..
|
|
|
|
|
|
|
NO
However, I could help you if you showed what you tried and explained how the results differ from what you want. It isn't my homework.
|
|
|
|
|
EjojAnrodac wrote: PLS. HELP..
Steps
1. Learn basics of SQL
2. Learn how AND works in WHERE clauses
3. Learn how wildcards work in LIKE expressions.
4. Use 1-3 to determine the answer.
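Steps 2 and 3 combine like this; a minimal sketch using SQLite from Python, with a made-up `employees` table and names (not the poster's actual schema):

```python
import sqlite3

# Toy data to illustrate combining two LIKE wildcard tests with AND.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [("Martin",), ("Smith",), ("King",), ("Jones",)])

# A row qualifies only when BOTH LIKE conditions hold:
# '%i%' means "an 'i' anywhere", '%n%' means "an 'n' anywhere".
rows = conn.execute(
    "SELECT last_name FROM employees "
    "WHERE last_name LIKE '%i%' AND last_name LIKE '%n%'"
).fetchall()
print([r[0] for r in rows])  # ['Martin', 'King']
```

Smith is excluded (no 'n') and Jones is excluded (no 'i'), which is exactly what the AND buys you over a single wildcard test.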
|
|
|
|
|
Which SQL dialect are you using? Standard SQL, T-SQL, PL/SQL?
|
|
|
|
|
Let me put you out of your misery: SQL Wildcards[^].
Note the article is very general: you will still have to work out how to apply the information therein.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
I want to get the data of a table into Excel directly from a SQL query.
Can anyone help?
- Happy Coding -
Vishal Vashishta
|
|
|
|
|
|
....
modified 12-Jul-12 4:32am.
|
|
|
|
|
Why!
What a horrendous design, and it comes back to: why would someone saddle themselves with such a nightmare?
You need to do as the error suggested and add a NotifyIcon to your service; that icon lives in the tray and is the path to a UI. Services do not have a UI!
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
If this structure is not suitable, please suggest something different. I need to fire an alarm when a data row is inserted into a particular SQL table.
|
|
|
|
|
Where is the alarm going to be consumed, on the server?
Why not have a trigger send an email via SQL Mail, or write an app that sends a text message? Most orgs either lock down the table so only authorised people have access, or use email alerts.
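The trigger idea can be sketched like this in SQLite from Python: on INSERT into the watched table, record an alert row that a monitoring job (or mail process) can pick up. Table names are invented; on SQL Server the trigger body would call something like Database Mail instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE alerts (order_id INTEGER, note TEXT);

-- Fires after every insert into the watched table and records an alert.
CREATE TRIGGER orders_alarm AFTER INSERT ON orders
BEGIN
    INSERT INTO alerts VALUES (NEW.id, 'new row inserted');
END;
""")

conn.execute("INSERT INTO orders (amount) VALUES (9.99)")
print(conn.execute("SELECT * FROM alerts").fetchall())  # [(1, 'new row inserted')]
```

The application never touches the `alerts` table directly; the database guarantees the alarm row exists for every insert, which is the point of putting it in a trigger rather than in client code.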
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
This is not a question!
I got caught by the parameter sniffing[^] issue the other day; it was a classic. I had built a proc that takes in 2 parameters, applies them against a bunch of tables, and inserts the results into a table variable to return a summary of the current period's processing status.
It worked perfectly during development and ran like a dream (&lt;10 sec) during the first round of UAT. Then it spat the dummy and I was getting a timeout from the service: the procedure ran perfectly from SSMS, but from the client it just refused to return a result.
3 hours later, having exhausted all possibilities, the little bulb clicked: USE LOCAL VARIABLES. I was applying the parameters multiple times to different tables; changing to a pair of local variables fixed the problem. A classic case of parameter sniffing.
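The local-variable workaround looks roughly like this (hypothetical proc, table, and column names; a sketch of the pattern, not the actual proc):

```sql
CREATE PROCEDURE dbo.GetPeriodSummary      -- hypothetical name
    @PeriodStart datetime,
    @PeriodEnd   datetime
AS
BEGIN
    -- Copy the parameters into local variables: the optimizer "sniffs"
    -- parameter values when compiling the plan, but treats local
    -- variables as unknown, so the plan is built for the average case
    -- rather than for the first caller's particular values.
    DECLARE @Start datetime = @PeriodStart;
    DECLARE @End   datetime = @PeriodEnd;

    SELECT COUNT(*) AS ProcessedRows       -- stand-in for the real summary
    FROM dbo.ProcessingLog                 -- hypothetical table
    WHERE ProcessedAt >= @Start
      AND ProcessedAt <  @End;
END
```

`OPTION (RECOMPILE)` and `OPTION (OPTIMIZE FOR UNKNOWN)` are other common ways to attack the same problem, each with its own compile-cost trade-off.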
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi,
I am having an issue with a selection list, which does not allow a selection of more than 1000 rows.
If I try to select more than 1000 rows, say by customer numbers, it comes back with this error message:
SQL Error: ORA-01795: maximum number of expressions in a list is 1000
I tried increasing the SQL array fetch size, and also increasing the fetch size in the property window of SQL Developer, but with no luck.
Any help is much appreciated.
I need to update 4 million rows.
Thanks!!!
|
|
|
|
|
I don't think it is related to the row count.
Google the error message and you will find numerous descriptions and workarounds.
|
|
|
|
|
I bet you are building your query like this ...
select xx from myTable where ID in (val1,val2,val3)
The problem is that Oracle has a limit of 1000 values in an "in" clause. For example, you cannot have val1,val2,...val1001 ; Oracle will throw an error.
You need to restructure your "where" clause.
A possible solution would be to create a temp table and insert those selected values into the table, then do a join to the temp table.
Something just doesn't sound right if you are selecting 4 million rows, but you haven't given much information on the problem. Maybe more info will lead to a better solution.
Good luck.
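The temp-table workaround can be sketched like this in SQLite from Python (names invented; in Oracle this would typically be a global temporary table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO myTable VALUES (?, 'old')",
                 [(i,) for i in range(1, 6)])

# Instead of a huge IN (val1, ..., valN) literal list, load the target
# ids into a temp table -- there is no 1000-expression limit on a table.
conn.execute("CREATE TEMP TABLE ids_to_update (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO ids_to_update VALUES (?)", [(2,), (4,)])

# The UPDATE filters via a subquery against the temp table.
conn.execute("""
    UPDATE myTable SET status = 'new'
    WHERE id IN (SELECT id FROM ids_to_update)
""")
print(conn.execute(
    "SELECT id FROM myTable WHERE status = 'new' ORDER BY id").fetchall())
# [(2,), (4,)]
```

ORA-01795 only applies to a literal expression list; a subquery (or a join to the temp table) can feed in millions of ids.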
|
|
|
|
|
Not selecting, but updating a value on 4 million rows.
Anyway, it is a limitation of the SQL "in" clause: you cannot have more than 1000 items in an IN list.
I will store the data in a temp table and join to it.
|
|
|
|
|
My bad for not seeing that you are updating, not selecting. Oops.
Good to know that you have a solution.
|
|
|
|
|
I have a single table called customers that stores all customer information. It has only one primary ID field, which is an identity. At times duplicate records are inserted, but since the primary IDs are different, the records count as different. There is no other field in the table that I can make unique.
What is the best way to introduce data integrity in the database so that duplicate records cannot be inserted into the table? Note that the table has more than 80 fields.
|
|
|
|
|
If you cannot make any other single column unique, then you need to make a combination of columns unique. For instance, you could choose to reject entries that have the same name at the same address. The problem with this approach is that you will have to decide how close the match should be, i.e. would Johnson PLC, 123 Albert St be the same as Johnson Private Limited Company, 123 Albert Street ? That is where your requirements and business process will need to come into play.
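A composite uniqueness constraint looks like this; a minimal sketch in SQLite from Python, with the column choice purely illustrative (the real table has 80-odd columns to pick from):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,   -- identity-style surrogate key
        name    TEXT NOT NULL,
        address TEXT NOT NULL,
        UNIQUE (name, address)         -- neither column is unique alone;
                                       -- the PAIR must be unique
    )
""")
conn.execute("INSERT INTO customers (name, address) "
             "VALUES ('Johnson PLC', '123 Albert St')")
try:
    conn.execute("INSERT INTO customers (name, address) "
                 "VALUES ('Johnson PLC', '123 Albert St')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

Note the caveat above still bites: 'Johnson Private Limited Company' at the same address would sail straight through, because the constraint only catches exact matches.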
Hope this helps
When I was a coder, we worked on algorithms. Today, we memorize APIs for countless libraries — those libraries have the algorithms - Eric Allman
|
|
|
|
|
Well, this would be interesting if I could do what you mention. I could use it for other purposes, and it would be helpful to me.
But in my situation, I cannot make a combination of columns unique. Things are like this:
1. Customers are renewed, so the data can be very similar from record to record.
2. Some honest mistakes can occur where the same data is added twice. The user can figure it out, but I want to stop it from the db end.
I am thinking I should make an existing field unique or add a new field which would be unique.
|
|
|
|
|
sharp_k wrote: But in my situation, I can not make a combination of columns unique.
The table represents a "customer". There are going to be columns in there, not all 80, which define what a unique "customer" is.
You CANNOT proceed until you determine which columns make it unique.
If there are a few columns then you can add a uniqueness constraint.
But let's say you have a 'lot' of columns, like 50; then you are probably out of luck for easy solutions, because it is unlikely that you can add a uniqueness constraint for that many columns.
In that case you would need to wrap ALL access to the table in a proc. The proc would verify, via a query, that no other record existed with those 50 columns before the insert. Views can often help with this.
You also need to consider exactly how those records get added, because now the system is going to start producing errors where it didn't produce errors before.
Additionally, if you have a 'lot' of columns which make it unique, then for something called a "customer" I would think that there is a design problem.
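The "wrap all access in a proc" idea can be sketched as a gatekeeper function; a minimal illustration in SQLite from Python, with the duplicate check trimmed to two columns (the real check would span many more):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers "
             "(id INTEGER PRIMARY KEY, name TEXT, address TEXT)")

def insert_customer(conn, name, address):
    """Refuse the insert when a row with the same business columns exists."""
    with conn:  # check and insert commit together as one transaction
        dup = conn.execute(
            "SELECT 1 FROM customers WHERE name = ? AND address = ?",
            (name, address)).fetchone()
        if dup:
            raise ValueError("duplicate customer")
        conn.execute(
            "INSERT INTO customers (name, address) VALUES (?, ?)",
            (name, address))

insert_customer(conn, "Johnson PLC", "123 Albert St")    # first insert: ok
try:
    insert_customer(conn, "Johnson PLC", "123 Albert St")
except ValueError as e:
    print(e)  # duplicate customer
```

As the post says, every code path must go through the gatekeeper for this to hold; one ad-hoc INSERT that bypasses it and the duplicates are back, which is why a declarative constraint is preferable whenever the column count allows it.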
|
|
|
|
|
Like others have said, you need to normalize your database.
When a customer is renewed you don't get a new customer, and should therefore not get a new entry in the customer table, but rather a new entry in the subscription table, customer history table, or whatever table makes most sense in your system.
As we don't know much about your database, we can only give you generalized advice. Mine would be to read this[^] article.
It describes, in an easy-to-understand way, how and why to normalize your database.
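The renewal point can be sketched as a schema; a minimal illustration in SQLite from Python, with an invented two-table layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One row per real-world customer; renewals never touch this table.
CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
-- One row per term; a renewal is just another subscription row.
CREATE TABLE subscriptions (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    starts_on   TEXT NOT NULL,
    ends_on     TEXT NOT NULL
);
""")

cust_id = conn.execute(
    "INSERT INTO customers (name) VALUES ('Johnson PLC')").lastrowid
# Initial term plus a renewal: one customer row, two subscription rows.
conn.executemany(
    "INSERT INTO subscriptions (customer_id, starts_on, ends_on) "
    "VALUES (?, ?, ?)",
    [(cust_id, "2011-07-01", "2012-06-30"),
     (cust_id, "2012-07-01", "2013-06-30")])

print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])      # 1
print(conn.execute("SELECT COUNT(*) FROM subscriptions").fetchone()[0])  # 2
```

With the renewals moved out, the UNIQUE constraint on the customer's identifying columns becomes feasible again, because "very similar" renewal rows no longer live in the customers table at all.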
|
|
|
|