- Sep 14, 2011
Yong Zhang authored
There are still some leftovers of commit f59ca058 [locking, lib/atomic64: Annotate atomic64_lock::lock as raw].

[ tglx: Seems I picked the wrong version of that patch :( ]

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shan Hai <haishan.bai@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Link: http://lkml.kernel.org/r/20110914074924.GA16096@zhy
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
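A hedged sketch of the kind of leftover such a conversion leaves behind (illustrative, not the exact diff; lock_addr() is lib/atomic64.c's hash helper, shown under the Jun 15, 2009 entry below): once atomic64_lock::lock is a raw_spinlock_t, any local lock pointer or lock/unlock call still written against the sleeping spinlock API has to be switched to the raw_* variants as well.

    /* lib/atomic64.c -- sketch of a leftover conversion */
    void atomic64_set(atomic64_t *v, long long i)
    {
            unsigned long flags;
            raw_spinlock_t *lock = lock_addr(v);    /* was: spinlock_t *lock */

            raw_spin_lock_irqsave(lock, flags);     /* was: spin_lock_irqsave() */
            v->counter = i;
            raw_spin_unlock_irqrestore(lock, flags);
    }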
- Sep 13, 2011
Shan Hai authored
The spinlock protected atomic64 operations must be irq safe as they are used in hard interrupt context and cannot be preempted on -rt:

    NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
    LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
    Call Trace:
    [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
    [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
    [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
    [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
    [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
    [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

So annotate it. In mainline this change documents the low level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual.

Signed-off-by: Shan Hai <haishan.bai@gmail.com>
Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
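The annotation itself is small; a minimal sketch along the lines of the mainline lib/atomic64.c declaration (details illustrative): declaring the hashed lock array with raw_spinlock_t keeps it a true spinning lock on -rt, where an ordinary spinlock_t becomes a sleeping rtmutex that must not be taken from hard interrupt paths such as perf_event_interrupt() in the trace above.

    #define NR_LOCKS        16

    /*
     * raw_spinlock_t never turns into a sleeping lock on -rt, so it is
     * safe to take from hard interrupt context.  Each lock gets its own
     * cacheline to avoid false sharing between the 16 locks.
     */
    static union {
            raw_spinlock_t lock;
            char pad[L1_CACHE_BYTES];
    } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;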
- Jul 26, 2011
Arun Sharma authored
This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
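The shape of that move, sketched under the usual kernel fallback pattern (treat the exact guard as illustrative): the per-architecture copies of atomic_inc_not_zero() are deleted, and a single definition built on atomic_add_unless() lives in <linux/atomic.h>, overridable by any architecture that provides a better primitive.

    /* <linux/atomic.h> -- sketch of the consolidated fallback */

    /*
     * atomic_inc_not_zero - increment unless the number is zero
     * @v: pointer of type atomic_t
     *
     * Atomically increments @v by 1, so long as @v is non-zero.
     * Returns non-zero if @v was non-zero, and zero otherwise.
     */
    #ifndef atomic_inc_not_zero
    #define atomic_inc_not_zero(v)  atomic_add_unless((v), 1, 0)
    #endif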
- Mar 01, 2010
Luca Barbieri authored
atomic64_add_unless must return 1 if it performed the add and 0 otherwise. The generic implementation did the opposite thing.

Reported-by: H. Peter Anvin <hpa@zytor.com>
Confirmed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Luca Barbieri <luca@luca-barbieri.com>
LKML-Reference: <1267469749-11878-4-git-send-email-luca@luca-barbieri.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
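The required convention matches atomic_add_unless(): 1 when the add happened, 0 when the value already equalled u. A sketch of the corrected generic implementation, in the spinlock-protected style of lib/atomic64.c of that era (lock_addr() is the file's hash helper):

    int atomic64_add_unless(atomic64_t *v, long long a, long long u)
    {
            unsigned long flags;
            spinlock_t *lock = lock_addr(v);
            int ret = 0;

            spin_lock_irqsave(lock, flags);
            if (v->counter != u) {
                    v->counter += a;
                    ret = 1;        /* the add was performed */
            }
            spin_unlock_irqrestore(lock, flags);
            return ret;             /* previously the 0/1 sense was inverted */
    }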
- Jul 30, 2009
Roland Dreier authored
The generic atomic64_t implementation in lib/ did not export the functions it defined, which means that modules that use atomic64_t would not link on platforms that rely on the generic implementation (such as 32-bit powerpc). For example, trying to build a kernel with CONFIG_NET_RDS on such a platform would fail with:

    ERROR: "atomic64_read" [net/rds/rds.ko] undefined!
    ERROR: "atomic64_set" [net/rds/rds.ko] undefined!

Fix this by exporting the atomic64_t functions to modules. (I export the entire API, even though not all of it is currently used by in-tree modules, to avoid having to keep fixing this in dribs and drabs.)

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
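The fix is mechanical; a sketch, assuming plain EXPORT_SYMBOL as in today's lib/atomic64.c, with one export placed next to each definition:

    #include <linux/module.h>       /* provides EXPORT_SYMBOL */

    EXPORT_SYMBOL(atomic64_read);   /* resolves "atomic64_read ... undefined!" */
    EXPORT_SYMBOL(atomic64_set);    /* resolves "atomic64_set ... undefined!" */
    EXPORT_SYMBOL(atomic64_add_return);
    EXPORT_SYMBOL(atomic64_sub_return);
    EXPORT_SYMBOL(atomic64_xchg);
    EXPORT_SYMBOL(atomic64_cmpxchg);
    EXPORT_SYMBOL(atomic64_add_unless);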
- Jun 15, 2009
Paul Mackerras authored
Many processor architectures have no 64-bit atomic instructions, but we need atomic64_t in order to support the perf_counter subsystem.

This adds an implementation of 64-bit atomic operations using hashed spinlocks to provide atomicity. For each atomic operation, the address of the atomic64_t variable is hashed to an index into an array of 16 spinlocks. That spinlock is taken (with interrupts disabled) around the operation, which can then be coded non-atomically within the lock.

On UP, all the spinlock manipulation goes away and we simply disable interrupts around each operation. In fact gcc eliminates the whole atomic64_lock variable as well.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
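A condensed sketch of the scheme the message describes, following the shape of the original lib/atomic64.c (the exact hash is illustrative): the variable's address selects one of 16 cacheline-padded spinlocks, and each 64-bit operation runs under that lock with interrupts disabled. This is the pre-raw form of the atomic64_lock array shown under the Sep 13, 2011 entry above.

    #define NR_LOCKS        16

    /* One lock per cacheline so the 16 locks don't false-share. */
    static union {
            spinlock_t lock;
            char pad[L1_CACHE_BYTES];
    } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

    /* Hash the atomic64_t's address down to one of the NR_LOCKS locks. */
    static inline spinlock_t *lock_addr(const atomic64_t *v)
    {
            unsigned long addr = (unsigned long) v;

            addr >>= L1_CACHE_SHIFT;
            addr ^= (addr >> 8) ^ (addr >> 16);
            return &atomic64_lock[addr & (NR_LOCKS - 1)];
    }

    /* Every operation follows the same pattern: take the hashed lock
     * with interrupts off, do the 64-bit work non-atomically, release.
     * On UP the spin_lock_irqsave() pair reduces to a plain interrupt
     * disable/enable, and gcc can drop atomic64_lock entirely.
     */
    long long atomic64_add_return(long long a, atomic64_t *v)
    {
            unsigned long flags;
            spinlock_t *lock = lock_addr(v);
            long long val;

            spin_lock_irqsave(lock, flags);
            val = v->counter += a;
            spin_unlock_irqrestore(lock, flags);
            return val;
    }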