From: Roland McGrath

After my cleanup of the rusage semantics was so quickly taken in by Andrew and Linus without comment, I wonder if I should not have tried to be as accommodating of potential objections as I was. :-)

In my original posting, I solicited comment on whether introducing RUSAGE_GROUP as distinct from RUSAGE_SELF was warranted. Note that we've now changed the behavior of the times system call when using CLONE_THREAD, so changing getrusage RUSAGE_SELF to match would be consistent. I think that changing the meaning of the old RUSAGE_SELF value is preferable to introducing a new value for the proper POSIX getrusage behavior.

This patch against Linus's current tree dumps RUSAGE_GROUP and makes RUSAGE_SELF have the fixed behavior.

If there is interest in having a new explicit interface to sample a single thread's stats alone, then I think that would be better done by introducing a new RUSAGE_THREAD value. This is trivial to implement, but I won't offer patches bloating the interface if no one is actually interested in using it.
Signed-off-by: Andrew Morton
---

 25-akpm/include/linux/resource.h |    1 -
 25-akpm/kernel/sys.c             |   15 +++------------
 2 files changed, 3 insertions(+), 13 deletions(-)

diff -puN include/linux/resource.h~nix-rusage_group include/linux/resource.h
--- 25/include/linux/resource.h~nix-rusage_group	2004-09-02 21:04:54.644015152 -0700
+++ 25-akpm/include/linux/resource.h	2004-09-02 21:04:54.649014392 -0700
@@ -17,7 +17,6 @@
 #define	RUSAGE_SELF	0
 #define	RUSAGE_CHILDREN	(-1)
 #define RUSAGE_BOTH	(-2)		/* sys_wait4() uses this */
-#define	RUSAGE_GROUP	(-3)		/* thread group sum + dead threads */
 
 struct	rusage {
 	struct timeval ru_utime;	/* user time used */
diff -puN kernel/sys.c~nix-rusage_group kernel/sys.c
--- 25/kernel/sys.c~nix-rusage_group	2004-09-02 21:04:54.646014848 -0700
+++ 25-akpm/kernel/sys.c	2004-09-02 21:04:54.651014088 -0700
@@ -1582,7 +1582,7 @@ asmlinkage long sys_setrlimit(unsigned i
  * This expects to be called with tasklist_lock read-locked or better,
  * and the siglock not locked.  It may momentarily take the siglock.
  *
- * When sampling multiple threads for RUSAGE_GROUP, under SMP we might have
+ * When sampling multiple threads for RUSAGE_SELF, under SMP we might have
  * races with threads incrementing their own counters.  But since word
  * reads are atomic, we either get new values or old values and we don't
  * care which for the sums.  We always take the siglock to protect reading
@@ -1603,14 +1603,6 @@ void k_getrusage(struct task_struct *p,
 		return;
 
 	switch (who) {
-		case RUSAGE_SELF:
-			jiffies_to_timeval(p->utime, &r->ru_utime);
-			jiffies_to_timeval(p->stime, &r->ru_stime);
-			r->ru_nvcsw = p->nvcsw;
-			r->ru_nivcsw = p->nivcsw;
-			r->ru_minflt = p->min_flt;
-			r->ru_majflt = p->maj_flt;
-			break;
 		case RUSAGE_CHILDREN:
 			spin_lock_irqsave(&p->sighand->siglock, flags);
 			utime = p->signal->cutime;
@@ -1623,7 +1615,7 @@ void k_getrusage(struct task_struct *p,
 			jiffies_to_timeval(utime, &r->ru_utime);
 			jiffies_to_timeval(stime, &r->ru_stime);
 			break;
-		case RUSAGE_GROUP:
+		case RUSAGE_SELF:
 			spin_lock_irqsave(&p->sighand->siglock, flags);
 			utime = stime = 0;
 			goto sum_group;
@@ -1672,8 +1664,7 @@ int getrusage(struct task_struct *p, int
 
 asmlinkage long sys_getrusage(int who, struct rusage __user *ru)
 {
-	if (who != RUSAGE_SELF && who != RUSAGE_CHILDREN
-	    && who != RUSAGE_GROUP)
+	if (who != RUSAGE_SELF && who != RUSAGE_CHILDREN)
 		return -EINVAL;
 	return getrusage(current, who, ru);
 }
_