Oliver Fromme check+jw4u8u00rs4qnc33 at
Tue Feb 12 08:07:35 PST 2008

Dave Hayes wrote:
 > Oliver Fromme writes:
 > > I don't see any advantage.
 > >
 > > And there's clearly a disadvantage:  People will increase
 > > the limit to "fix" their broken scripts and Makefiles.
 > > Or even worse, they will write scripts that are broken
 > > from the start, and they won't even notice.
 > One can also argue that the mechanism is broken because it doesn't
 > dynamically allocate enough memory to handle the result of an argument
 > expansion in these days where 64KB is not a lot of memory.

So how do you propose to fix it?

It cannot be fixed in the shell.  In fact, the shell
(at least /bin/sh) has no such limit.  It can process
expansions of arbitrary length, subject to the usual
memory constraints of the process.
The limit is in the kernel's execve(2) syscall.  The
function copies the argument vector and environment
to a temporary buffer that is allocated from kernel
memory (kmem).  If there wasn't a limit, a simple
shell command (e.g. "/bin/echo `cat largefile`") would
exhaust kmem and crash the machine with a panic.

 > > In fact it wouldn't be a bad idea to _lower_ the limit,
 > > so people become aware of the bugs and have an incentive
 > > to really fix their scripts.
 >  ...   
 > > PS:  No, that last suggestion wasn't meant to be serious.
 > But it illustrates why the limit will never change. Innovation? We don't
 > need it! There will always be limits! ;)

Sometimes limits are good to prevent bad things from
happening.
Best regards

Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht Muenchen, HRA 74606,  Geschäftsfuehrung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758,  Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart

FreeBSD services, products, and more:

More information about the Kernel mailing list