According to this forum discussion, SQL Server (I'm using 2005, but I gather this also applies to 2000 and 2008) silently truncates any
varchar you pass as a stored procedure parameter to the declared length of the parameter, even though inserting the same string directly with an
INSERT would cause an error. E.g. if I create this table:
CREATE TABLE testTable( [testStringField] [nvarchar](5) NOT NULL )
then when I execute the following:
INSERT INTO testTable(testStringField) VALUES(N'string which is too long')
I get an error:
String or binary data would be truncated. The statement has been terminated.
Great. Data integrity preserved, and the caller knows about it. Now let's define a stored procedure to insert that:
CREATE PROCEDURE spTestTableInsert
    @testStringField [nvarchar](5)
AS
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
and execute it:
EXEC spTestTableInsert @testStringField = N'string which is too long'
No errors, 1 row affected. A row is inserted into the table, with
testStringField as 'strin'. SQL Server silently truncated the stored procedure's parameter to fit its declared length.
Now, this behaviour might be convenient at times, but I gather there is NO WAY to turn it off. This is extremely annoying, as I want the call to fail if I pass too long a string to the stored procedure. There seem to be two ways to deal with this.
First, declare the stored proc's
@testStringField parameter as size 6, and check whether its length is over 5. This seems like a bit of a hack and involves an irritating amount of boilerplate code.
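A minimal sketch of that first approach (the procedure name, the extra character of width, and the explicit length check are my illustration, not from the original post):

```sql
-- Declare the parameter one character wider than the column, so
-- over-length input survives long enough to be detected: anything
-- longer than 6 is truncated to 6, which still fails the LEN > 5 check.
CREATE PROCEDURE spTestTableInsertChecked
    @testStringField [nvarchar](6)
AS
IF LEN(@testStringField) > 5
BEGIN
    RAISERROR('Value for testStringField exceeds 5 characters.', 16, 1)
    RETURN
END
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
```

Note that LEN ignores trailing spaces, so a value padded with blanks would slip through this particular check.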
Second, just declare ALL stored procedure varchar parameters as
varchar(max), and then let the
INSERT statement within the stored procedure fail.
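The second approach would look something like this (my sketch of the post's suggestion; the procedure name is hypothetical):

```sql
-- nvarchar(max) accepts the full input untruncated, so the INSERT
-- itself raises "String or binary data would be truncated" whenever
-- the value exceeds the column's nvarchar(5) limit.
CREATE PROCEDURE spTestTableInsertMax
    @testStringField [nvarchar](max)
AS
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
```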
The latter seems to work fine, so my question is: is it a good idea to ALWAYS use
varchar(max) for string parameters in SQL Server stored procedures, if I actually want the stored proc to fail when too long a string is passed? Could it even be best practice? The silent truncation that can't be disabled seems stupid to me.
It just is.
I've never noticed a problem, though, because one of my checks is to ensure my parameters match my table column lengths. In the client code too. Personally, I'd expect SQL Server never to see data that is too long. If I did see truncated data, it'd be bleeding obvious what caused it.
If you do feel the need for varchar(max), beware of a massive performance issue caused by datatype precedence: varchar(max) has higher precedence than varchar(n) (longer is higher). So in this type of query you'll get a scan, not a seek, because every varchar(100) value is CAST to varchar(max):
UPDATE ...WHERE varchar100column = @varcharmaxvalue
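One workaround (my suggestion, not part of the original answer; the table and column names are hypothetical) is to cast the parameter down to the column's type, so the conversion happens once on the parameter rather than on every indexed row. The cast will itself silently truncate an over-length value, so do any length validation before this point:

```sql
-- Casting the varchar(max) parameter to varchar(100) keeps the
-- indexed column unconverted, preserving the chance of an index seek.
UPDATE dbo.SomeTable
SET someFlag = 1
WHERE varchar100column = CAST(@varcharmaxvalue AS varchar(100));
```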
One could always raise a Connect issue with MS. At least they may explain this behaviour.
And it's probably worthy of inclusion in Erland Sommarskog's Strict settings
Edit 2, after Martin's comment
DECLARE @sql varchar(max), @nsql nvarchar(max);
SELECT @sql = 'B', @nsql = 'B';
SELECT LEN(@sql), LEN(@nsql), DATALENGTH(@sql), DATALENGTH(@nsql);

DECLARE @t table (c varchar(8000));
INSERT INTO @t VALUES (REPLICATE('A', 7500));

SELECT LEN(c) FROM @t;

SELECT LEN(@sql + c), LEN(@nsql + c), DATALENGTH(@sql + c), DATALENGTH(@nsql + c) FROM @t;