SQL Optimization: how many columns on a table? [database-design]
Generally it's better to design your tables first to model the data requirements and to satisfy the rules of normalization. Then worry about optimizations like how many pages it takes to store a row, etc.
I agree with other posters here that the large number of columns is a potential red flag that your table is not properly normalized. But it might be fine in this case; we can't tell from your description.
In any case, splitting the table up just because the large number of columns makes you uneasy is not the right remedy. Is it really causing any defects or a performance bottleneck? You need to measure to be sure, not suppose.
A good rule of thumb that I've found is simply whether or not a table keeps gaining columns as a project continues.
On a project I'm working on, the original designers decided to include site permissions as columns in the user table. So now we are constantly adding more columns as new features are implemented on the site. Obviously this is not optimal. A better solution would be to have a table containing permissions and a join table between users and permissions to assign them.
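The permissions/join-table design described above could be sketched like this. This is only an illustration: the table and column names are made up, and SQLite (via Python's stdlib `sqlite3`) stands in for whatever database the project actually uses. The point is that granting a brand-new permission becomes an `INSERT`, not an `ALTER TABLE`.

```python
import sqlite3

# Hypothetical schema: permissions are rows, not columns on the users table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE permissions (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE          -- e.g. 'edit_pages', 'manage_users'
);
CREATE TABLE user_permissions (        -- the join table
    user_id       INTEGER NOT NULL REFERENCES users(id),
    permission_id INTEGER NOT NULL REFERENCES permissions(id),
    PRIMARY KEY (user_id, permission_id)
);
""")

conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("INSERT INTO permissions VALUES (1, 'edit_pages'), (2, 'manage_users')")
conn.execute("INSERT INTO user_permissions VALUES (1, 1), (1, 2)")

# A new feature's permission later is just another INSERT -- no schema change.
rows = conn.execute("""
    SELECT p.name
    FROM permissions p
    JOIN user_permissions up ON up.permission_id = p.id
    WHERE up.user_id = 1
    ORDER BY p.name
""").fetchall()
print([r[0] for r in rows])  # ['edit_pages', 'manage_users']
```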
However, for other more archival information, or for tables that simply don't have to grow, or that need to be cached / minimize pages / can be filtered effectively, having a large table doesn't hurt too much, as long as it doesn't hamper maintenance of the project.
At least that is my opinion.
Usually an excess of columns points to improper normalization, but it is hard to judge without some more details about your requirements.
I can picture times when it might be necessary to have this many columns, or more. Examples would be if you had to denormalize and cache data, or if you had a type of row with many attributes. I think the keys are to avoid SELECT * and to make sure you are indexing the right columns and composites.
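The advice above (narrow SELECTs plus a composite index on the columns you filter by) can be demonstrated on a toy table. Everything here is illustrative: the `cases` table, its columns, and the index name are invented, and SQLite's `EXPLAIN QUERY PLAN` stands in for whatever plan inspection tool your database offers.

```python
import sqlite3

# Hypothetical wide table; only a few of its columns are shown.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE cases (
    id     INTEGER PRIMARY KEY,
    status TEXT,
    region TEXT,
    owner  TEXT,
    notes  TEXT                        -- ...imagine many more columns here
)""")
# Composite index matching the common WHERE clause (status, region):
conn.execute("CREATE INDEX idx_cases_status_region ON cases(status, region)")
conn.executemany(
    "INSERT INTO cases(status, region, owner, notes) VALUES (?, ?, ?, ?)",
    [("open", "east", "bob", "x"),
     ("open", "west", "eve", "y"),
     ("closed", "east", "ann", "z")])

# Select only the columns needed, not SELECT *; the plan shows an index search.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id, owner FROM cases WHERE status = 'open' AND region = 'east'
""").fetchone()
print("idx_cases_status_region" in plan[3])
```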
If you had an object detailing the data in the database, would you have a single object with 120 fields, or would you look through the data to extract what is logically distinguishable? You can inline Address data with Customer data, but it makes sense to pull it out into an Addresses table, even if it keeps a 1:1 mapping with the Person.
Down the line you might need a record of their previous address, and by splitting it out you've removed one major problem in refactoring your system.
Are any of the fields duplicated over multiple rows? I.e., are the customer's details replicated, one per invoice? In that case there should be one customer entry in the Customers table, and n entries in the Invoices table.
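That one-customer-to-n-invoices shape could be sketched as below. The schema is illustrative (names and columns are made up), using SQLite from Python's stdlib so it runs anywhere: the customer's details live in exactly one row, and each invoice holds only a foreign key.

```python
import sqlite3

# Hypothetical schema: customer details stored once, referenced by invoices.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    address TEXT NOT NULL
);
CREATE TABLE invoices (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total       REAL NOT NULL
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme Ltd', '1 High St')")
conn.executemany("INSERT INTO invoices VALUES (?, 1, ?)",
                 [(1, 10.0), (2, 25.5), (3, 7.25)])

# One customer entry, n invoice entries -- no replicated details to drift apart.
n = conn.execute(
    "SELECT COUNT(*) FROM invoices WHERE customer_id = 1").fetchone()[0]
print(n)  # 3
```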
One place where you should not "fix" broken normalisation is a facts table (for auditing, etc.) whose purpose is to aggregate data to run analyses on. These tables are usually populated from the properly normalised tables, however (overnight, for example).
It sounds like you have potential normalization issues.
If you really want to, you can create a new table for each of those columns (a little extreme) or for each group of related columns, and join it on the ID of each record.
It could certainly affect performance if people are running around with a lot of "SELECT * FROM GiantTableWithManyColumns"...
Here are the official statistics for SQL Server 2005: http://msdn.microsoft.com/en-us/library/ms143432.aspx
Keep in mind these are the maximums, and are not necessarily the best for usability.
Think about splitting the 126 columns into sections. For instance, if it is some sort of "person" table you could have:
Person: ID, AddressNum, AddressSt, AptNo, Province, Country, PostalCode, Telephone, CellPhone, Fax
But you could separate that into:
Person: ID, AddressID, PhoneID
Address: ID, AddressNum, AddressSt, AptNo, Province, Country, PostalCode
Phone: ID, Telephone, CellPhone, Fax
With the second layout, you could also save yourself from data replication by having all the people with the same address share the same AddressID, instead of copying the same text over and over.
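The Person/Address/Phone split above could be sketched as follows. Column names mirror the listing but are still only illustrative, and SQLite (Python stdlib) stands in for SQL Server; the key point is two person rows sharing one address row.

```python
import sqlite3

# Hypothetical split schema: person rows reference shared address/phone rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE addresses (
    id INTEGER PRIMARY KEY,
    address_num TEXT, address_st TEXT, apt_no TEXT,
    province TEXT, country TEXT, postal_code TEXT
);
CREATE TABLE phones (
    id INTEGER PRIMARY KEY,
    telephone TEXT, cell_phone TEXT, fax TEXT
);
CREATE TABLE persons (
    id INTEGER PRIMARY KEY,
    address_id INTEGER REFERENCES addresses(id),
    phone_id   INTEGER REFERENCES phones(id)
);
""")
conn.execute("INSERT INTO addresses VALUES "
             "(1, '12', 'Main St', NULL, 'ON', 'CA', 'K1A 0B1')")
conn.execute("INSERT INTO phones VALUES (1, '555-0100', NULL, NULL)")
# Two people at the same address share one addresses row:
conn.execute("INSERT INTO persons VALUES (1, 1, 1), (2, 1, NULL)")

shared = conn.execute(
    "SELECT COUNT(*) FROM persons WHERE address_id = 1").fetchone()[0]
print(shared)  # 2
```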
The UserData table in SharePoint has 201 fields, but it is designed for a special purpose.
Normal tables should not be this wide, in my opinion.
You could probably normalize some more, and read some posts on the web about table optimization.
It is hard to say without knowing a little bit more.
Well, I don't know how many columns are possible in SQL, but one thing I am very sure of is that when you design tables, each table is an entity: each table should contain information about a person, a place, an event, or an object. I have never encountered a single thing that has that much data/information.
The second thing you should note is that there is a method called normalization, which is basically used to divide data/information into subsections so that one can easily maintain the database. I think this will clarify the idea for you.
I'm in a similar position. Yes, there truly is a situation where a normalized table has, as in my case, about 90 columns: a workflow application that tracks the many states that a case can have, in addition to variable attributes for each state. So as each case (represented by the record) progresses, eventually all columns are filled in for that case. Now, in my situation there are 3 logical groupings (15 cols + 10 cols + 65 cols). So do I keep it in one table (indexed on CaseID), or do I split it into 3 tables connected by a one-to-one relationship?
From the replication limits:
Columns in a table (merge publication): 246
Columns in a table (SQL Server snapshot or transactional publication): 1,000
Columns in a table (Oracle snapshot or transactional publication): 995
So, in a table used for merge publication, we can have a maximum of 246 columns.