- Mar 08, 2019
-
-
Dave Rodgman authored
Patch series "lib/lzo: run-length encoding support", v5. Following on from the previous lzo-rle patchset: https://lkml.org/lkml/2018/11/30/972 This patchset contains only the RLE patches, and should be applied on top of the non-RLE patches ( https://lkml.org/lkml/2019/2/5/366 ). Previously, some questions were raised around the RLE patches. I've done some additional benchmarking to answer these questions. In short: - RLE offers significant additional performance (data-dependent) - I didn't measure any regressions that were clearly outside the noise One concern with this patchset was around performance - specifically, measuring RLE impact separately from Matt Sealey's patches (CTZ & fast copy). I have done some additional benchmarking which I hope clarifies the benefits of each part of the patchset. Firstly, I've captured some memory via /dev/fmem from a Chromebook with many tabs open which is starting to swap, and then split this into 4178 4k pages. I've excluded the all-zero pages (as zram does), and also the no-zero pages (which won't tell us anything about RLE performance). This should give a realistic test dataset for zram. What I found was that the data is VERY bimodal: 44% of pages in this dataset contain 5% or fewer zeros, and 44% contain over 90% zeros (30% if you include the no-zero pages). This supports the idea of special-casing zeros in zram. Next, I've benchmarked four variants of lzo on these pages (on 64-bit Arm at max frequency): baseline LZO; baseline + Matt Sealey's patches (aka MS); baseline + RLE only; baseline + MS + RLE. Numbers are for weighted roundtrip throughput (the weighting reflects that zram does more compression than decompression). https://drive.google.com/file/d/1VLtLjRVxgUNuWFOxaGPwJYhl_hMQXpHe/view?usp=sharing Matt's patches help in all cases for Arm (and no effect on Intel), as expected. RLE also behaves as expected: with few zeros present, it makes no difference; above ~75%, it gives a good improvement (50 - 300 MB/s on top of the benefit from Matt's patches). Best performance is seen with both MS and RLE patches. Finally, I have benchmarked the same dataset on an x86-64 device. Here, the MS patches make no difference (as expected); RLE helps, similarly as on Arm. There were no definite regressions; allowing for observational error, 0.1% (3/4178) of cases had a regression > 1 standard deviation, of which the largest was 4.6% (1.2 standard deviations). I think this is probably within the noise. https://drive.google.com/file/d/1xCUVwmiGD0heEMx5gcVEmLBI4eLaageV/view?usp=sharing One point to note is that the graphs show RLE appears to help very slightly with no zeros present! This is because the extra code causes the clang optimiser to change code layout in a way that happens to have a significant benefit. Taking baseline LZO and adding a do-nothing line like "__builtin_prefetch(out_len);" immediately before the "goto next" has the same effect. So this is a real, but basically spurious effect - it's small enough not to upset the overall findings. This patch (of 3): When using zram, we frequently encounter long runs of zero bytes. This adds a special case which identifies runs of zeros and encodes them using run-length encoding. This is faster for both compression and decompresion. For high-entropy data which doesn't hit this case, impact is minimal. Compression ratio is within a few percent in all cases. 
This modifies the bitstream in a way which is backwards compatible (i.e., we can decompress old bitstreams, but old versions of lzo cannot decompress new bitstreams). Link: http://lkml.kernel.org/r/20190205155944.16007-2-dave.rodgman@arm.com Signed-off-by:
Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Matt Sealey <matt.sealey@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
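A minimal sketch of the zero-run idea described above (illustrative only: the real lzo-rle encodes runs inside the existing LZO opcode space, all names below are hypothetical, and a real bitstream must also disambiguate literal bytes equal to the marker):

    #include <stddef.h>

    #define ZRLE_MARKER 0xff        /* hypothetical escape byte */

    /* Encode runs of four or more zero bytes as a {marker, length} pair. */
    static size_t zrle_encode(const unsigned char *in, size_t in_len,
                              unsigned char *out)
    {
            size_t i = 0, o = 0;

            while (i < in_len) {
                    size_t run = 0;

                    /* count a run of zero bytes, capped at 255 */
                    while (i + run < in_len && in[i + run] == 0 && run < 255)
                            run++;

                    if (run >= 4) {         /* long run: emit marker + length */
                            out[o++] = ZRLE_MARKER;
                            out[o++] = (unsigned char)run;
                            i += run;
                    } else {                /* short run or literal data */
                            out[o++] = in[i++];
                    }
            }
            return o;
    }

Decoding reverses this: on seeing the marker, emit that many zero bytes instead of copying, which is why both directions get faster on zero-heavy pages.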
-
Matt Sealey authored
Enable faster 8-byte copies on arm64. Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodgman@arm.com Link: http://lkml.kernel.org/r/20190205141950.9058-4-dave.rodgman@arm.com Signed-off-by:
Matt Sealey <matt.sealey@arm.com> Signed-off-by:
Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Matt Sealey authored
LZO leaves some performance on the table by not realising that arm64 can optimize count-trailing-zeros bit operations. Add CONFIG_ARM64 to the checked definitions alongside CONFIG_X86_64 to enable the use of rbit/clz instructions on full 64-bit quantities. Link: http://lkml.kernel.org/r/20181127161913.23863-5-dave.rodgman@arm.com Link: http://lkml.kernel.org/r/20190205141950.9058-3-dave.rodgman@arm.com Signed-off-by:
Matt Sealey <matt.sealey@arm.com> Signed-off-by:
Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
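The shape of the change, sketched (macro names assumed rather than quoted from lzodefs.h):

    #if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
    #define LZO_USE_CTZ64   1       /* ctz on full 64-bit quantities */
    #endif

    #ifdef LZO_USE_CTZ64
    /* on arm64, GCC/clang lower __builtin_ctzll() to rbit + clz */
    #define lzo_ctz64(v)    __builtin_ctzll(v)
    #endif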
-
Dave Rodgman authored
Patch series "lib/lzo: performance improvements", v5. This patch (of 3): Modify the ifdefs in lzodefs.h to be more consistent with normal kernel macros (e.g., change __aarch64__ to CONFIG_ARM64). Link: http://lkml.kernel.org/r/20190205141950.9058-2-dave.rodgman@arm.com Signed-off-by:
Dave Rodgman <dave.rodgman@arm.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David S. Miller <davem@davemloft.net> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Matt Sealey <matt.sealey@arm.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Anders Roxell authored
When booting an allmodconfig kernel, there are a lot of false positives, each a message like 'UBSAN: Undefined behaviour in...' followed by a call trace. The UBSAN warnings are a result of enabling the noisy CONFIG_UBSAN_ALIGNMENT, which is disabled by default if HAVE_EFFICIENT_UNALIGNED_ACCESS=y. It's noisy even if we don't have efficient unaligned access; e.g. people often add __cacheline_aligned_in_smp to structs, but forget to align allocations of such structs (kmalloc() gives 8-byte alignment in the worst case). Rework so that building an allmodconfig kernel, which turns everything into '=m' or '=y', turns off UBSAN_ALIGNMENT. [aryabinin@virtuozzo.com: changelog addition] Link: http://lkml.kernel.org/r/20181217150326.30933-1-anders.roxell@linaro.org Signed-off-by:
Anders Roxell <anders.roxell@linaro.org> Suggested-by:
Arnd Bergmann <arnd@arndb.de> Acked-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Dan Carpenter authored
The test_fw_config->reqs allocation succeeded so these addresses can't be NULL. Also on the second error path, we forgot to set "rc = -ENOMEM;". Link: http://lkml.kernel.org/r/20190221183700.GA1737@kadam Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by:
Andrew Morton <akpm@linux-foundation.org> Cc: "Luis R. Rodriguez" <mcgrof@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
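The bug class in sketch form (generic names, not quoted from the test_firmware code): on an allocation-failure path, forgetting to assign the return code makes the function report success.

    void *buf;
    int rc = 0;

    buf = vzalloc(size);
    if (!buf) {
            rc = -ENOMEM;   /* the assignment the second error path was missing */
            goto out;
    }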
-
Gustavo A. R. Silva authored
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch fixes the following warning: lib/assoc_array.c: In function `assoc_array_delete': lib/assoc_array.c:1110:3: warning: this statement may fall through [-Wimplicit-fallthrough=] for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) { ^~~ lib/assoc_array.c:1118:2: note: here case assoc_array_walk_tree_empty: ^~~~ Warning level 3 was used: -Wimplicit-fallthrough=3 This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough. Link: http://lkml.kernel.org/r/20190212212206.GA16378@embeddedor Signed-off-by:
Gustavo A. R. Silva <gustavo@embeddedor.com> Cc: Kees Cook <keescook@chromium.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
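A self-contained example of the annotation style that satisfies -Wimplicit-fallthrough=3 (at that level GCC accepts a comment matching "fall through" placed where control deliberately drops into the next case):

    static int classify(int c)
    {
            int n = 0;

            switch (c) {
            case 0:
                    n += 1;
                    /* fall through */
            case 1:
                    n += 2;
                    break;
            default:
                    n = -1;
                    break;
            }
            return n;
    }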
-
Olof Johansson authored
Since we now build with -Wvla, any use of VLA throws a warning. Including this test, so... maybe we should just remove the test? lib/test_ubsan.c: In function 'test_ubsan_vla_bound_not_positive': lib/test_ubsan.c:48:2: warning: ISO C90 forbids variable length array 'buf' [-Wvla] For the out-of-bounds test, switch to non-VLA setup. lib/test_ubsan.c: In function 'test_ubsan_out_of_bounds': lib/test_ubsan.c:64:2: warning: ISO C90 forbids variable length array 'arr' [-Wvla] Link: http://lkml.kernel.org/r/20190113183210.56154-1-olof@lixom.net Signed-off-by:
Olof Johansson <olof@lixom.net> Acked-by:
Dmitry Vyukov <dvyukov@google.com> Cc: Colin Ian King <colin.king@canonical.com> Cc: Jinbum Park <jinb.park7@gmail.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Kees Cook <keescook@chromium.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
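A sketch of the non-VLA rewrite for the out-of-bounds case (shape assumed, close to but not quoted from lib/test_ubsan.c): a fixed-size array with volatile indices keeps the deliberate overflow from being optimized away while avoiding -Wvla.

    static void test_oob_sketch(void)
    {
            volatile int i = 4, j = 5;
            volatile int arr[4];    /* fixed size: no -Wvla warning */

            arr[j] = i;             /* still an out-of-bounds store for UBSAN to flag */
    }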
-
Stanislaw Gruszka authored
fls counts bits starting from 1 to 32 (and returns 0 for a zero argument). If we add 1, we shift right one bit more and lose precision from the divisor, which causes the function to return incorrect results for some numbers. The corrected code was tested in user space; see bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=202391 Link: http://lkml.kernel.org/r/1548686944-11891-1-git-send-email-sgruszka@redhat.com Fixes: 658716d1 ("div64_u64(): improve precision on 32bit platforms") Signed-off-by:
Stanislaw Gruszka <sgruszka@redhat.com> Reported-by:
Siarhei Volkau <lis8215@gmail.com> Tested-by:
Siarhei Volkau <lis8215@gmail.com> Acked-by:
Oleg Nesterov <oleg@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
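In sketch form (simplified from lib/div64.c), the corrected scaling step; fls() already returns a 1-based bit index, so the old '1 + fls(high)' shifted one bit too many out of the divisor:

    u64 div64_u64_sketch(u64 dividend, u64 divisor)
    {
            u32 high = divisor >> 32;
            u64 quot;

            if (high == 0) {
                    quot = div_u64(dividend, divisor);
            } else {
                    int n = fls(high);      /* was: 1 + fls(high) */

                    quot = div_u64(dividend >> n, divisor >> n);
                    if (quot != 0)
                            quot--;
                    if ((dividend - quot * divisor) >= divisor)
                            quot++;
            }
            return quot;
    }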
-
Rasmus Villemoes authored
This serves two purposes: First, we get a diagnostic if (though extremely unlikely) any of the calls of ddebug_add_module for built-in code fails, effectively disabling dynamic_debug. Second, I want to make struct _ddebug opaque, and avoid accessing any of its members outside dynamic_debug.[ch]. Link: http://lkml.kernel.org/r/20190212214150.4807-9-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
The only caller of ddebug_{add,remove}_module outside dynamic_debug.c is kernel/module.c, which is obviously not itself modular (though it would be an interesting exercise to make that happen...). I also fail to see how these interfaces can be used by modules, in-tree or not. Link: http://lkml.kernel.org/r/20190212214150.4807-8-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
Now that we store the passed-in string directly in ddebug_add_module, we can use pointer equality instead of strcmp. This is a little more efficient, but more importantly, this also makes the code somewhat more correct: Currently, if one loads and then unloads a module whose name happens to match the KBUILD_MODNAME of some built-in functionality (which need not even be modular at all), all of their dynamic debug entries vanish along with those of the actual module. For example, loading and unloading a core.ko hides all pr_debugs from drivers/base/core.c and other built-in files called core.c (incidentally, there is an in-tree module whose name is core, but I just tested this with an out-of-tree trivial one). Link: http://lkml.kernel.org/r/20190212214150.4807-7-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
For built-in modules, we're already reusing the passed-in string via kstrdup_const(). But for actual modules (i.e. when we're called from dynamic_debug_setup in module.c), the passed-in string (which points at the name[] array inside struct module) is also guaranteed to live at least as long as the struct ddebug_table, since free_module() calls ddebug_remove_module(). Link: http://lkml.kernel.org/r/20190212214150.4807-6-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
At the time of commit d0484193 ("lib/vsprintf.c: expand field_width to 24 bits"), there was no compiletime_assert/BUILD_BUG/.... variant that could be used outside function scope. Now we have static_assert(), so move the assertion next to the definition instead of hiding it in some arbitrary function. Also add the appropriate #include to avoid relying on build_bug.h being pulled in via some arbitrary chain of includes. Link: http://lkml.kernel.org/r/20190208203015.29702-2-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Kees Cook <keescook@chromium.org> Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
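The pattern this enables, sketched around the struct in question (the field layout shown here is an approximation, not quoted exactly from lib/vsprintf.c):

    #include <linux/build_bug.h>    /* static_assert() */

    struct printf_spec {
            unsigned int    type:8;
            signed int      field_width:24; /* the widened field being asserted on */
            unsigned int    flags:8;
            unsigned int    base:8;
            signed int      precision:16;
    } __packed;
    static_assert(sizeof(struct printf_spec) == 8);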
-
- Mar 06, 2019
-
-
Changbin Du authored
Move the PAGE_OWNER option from submenu "Compile-time checks and compiler options" to dedicated submenu "Memory Debugging". Link: http://lkml.kernel.org/r/20190120024254.6270-1-changbin.du@gmail.com Signed-off-by:
Changbin Du <changbin.du@gmail.com> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Randy Dunlap <rdunlap@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Uladzislau Rezki (Sony) authored
This adds a new kernel module for analysis of the vmalloc allocator. It is only enabled as a module. There are two main use cases for this module: performance evaluation and stressing of the vmalloc subsystem. It consists of several test cases. As of now there are 8. The module has five parameters we can specify to change its behaviour. 1) run_test_mask - set of tests to be run id: 1, name: fix_size_alloc_test id: 2, name: full_fit_alloc_test id: 4, name: long_busy_list_alloc_test id: 8, name: random_size_alloc_test id: 16, name: fix_align_alloc_test id: 32, name: random_size_align_alloc_test id: 64, name: align_shift_alloc_test id: 128, name: pcpu_alloc_test By default all tests are in the run test mask. If you want to select some specific tests, it is possible to pass the mask. For example, for the first, second and fourth tests we pass the value 11 (1 + 2 + 8). 2) test_repeat_count - how many times each test should be repeated By default it is one time per test. It is possible to pass any number; the higher the value, the longer the tests take. 3) test_loop_count - internal test loop counter. By default it is set to 1000000. 4) single_cpu_test - use one CPU to run the tests By default this parameter is set to false. It means that all online CPUs execute tests. By setting it to 1, the tests are executed by the first online CPU only. 5) sequential_test_order - run tests in sequential order By default this parameter is set to false. It means that before running tests the order is shuffled. It is possible to make it sequential, just set it to 1. Performance analysis: In order to evaluate the performance of vmalloc allocations, it usually makes sense to use only one CPU to run the tests and to use sequential order; the repeat count and the test mask can be varied as well. For example, to run all tests on one CPU with each test repeated 3 times, insert the module passing the following parameters: single_cpu_test=1 sequential_test_order=1 test_repeat_count=3 with following output: <snip> Summary: fix_size_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 901177 usec Summary: full_fit_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 1039341 usec Summary: long_busy_list_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 11775763 usec Summary: random_size_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 6081992 usec Summary: fix_align_alloc_test passed: 3 failed: 0 repeat: 3, loops: 1000000 avg: 2003712 usec Summary: random_size_align_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 2895689 usec Summary: align_shift_alloc_test passed: 0 failed: 3 repeat: 3 loops: 1000000 avg: 573 usec Summary: pcpu_alloc_test passed: 3 failed: 0 repeat: 3 loops: 1000000 avg: 95802 usec All test took CPU0=192945605995 cycles <snip> The align_shift_alloc_test is expected to fail. Stressing: In order to stress the vmalloc subsystem, we run all available test cases on all available CPUs simultaneously. To prevent a constant behaviour pattern, the test cases array is shuffled by default to randomize the order of test execution. For example, to run all tests (default) on all online CPUs (default) in shuffled order (default), repeating each test 30 times, the command would be: modprobe vmalloc_test test_repeat_count=30 Expected results are: the system is alive, there are no BUG_ONs or kernel panics, the tests complete, and there are no memory leaks. 
[urezki@gmail.com: fix 32-bit builds] Link: http://lkml.kernel.org/r/20190106214839.ffvjvmrn52uqog7k@pc636 [urezki@gmail.com: make CONFIG_TEST_VMALLOC depend on CONFIG_MMU] Link: http://lkml.kernel.org/r/20190219085441.s6bg2gpy4esny5vw@pc636 Link: http://lkml.kernel.org/r/20190103142108.20744-3-urezki@gmail.com Signed-off-by:
Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Kees Cook <keescook@chromium.org> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Anshuman Khandual authored
Patch series "Replace all open encodings for NUMA_NO_NODE", v3. All these places for replacement were found by running the following grep patterns on the entire kernel code. Please let me know if this might have missed some instances. This might also have replaced some false positives. I will appreciate suggestions, inputs and review. 1. git grep "nid == -1" 2. git grep "node == -1" 3. git grep "nid = -1" 4. git grep "node = -1" This patch (of 2): At present there are multiple places where invalid node number is encoded as -1. Even though implicitly understood it is always better to have macros in there. Replace these open encodings for an invalid node number with the global macro NUMA_NO_NODE. This helps remove NUMA related assumptions like 'invalid node' from various places redirecting them to a common definition. Link: http://lkml.kernel.org/r/1545127933-10711-2-git-send-email-anshuman.khandual@arm.com Signed-off-by:
Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by:
David Hildenbrand <david@redhat.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> [ixgbe] Acked-by: Jens Axboe <axboe@kernel.dk> [mtip32xx] Acked-by: Vinod Koul <vkoul@kernel.org> [dmaengine.c] Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc] Acked-by: Doug Ledford <dledford@redhat.com> [drivers/infiniband] Cc: Joseph Qi <jiangqi903@gmail.com> Cc: Hans Verkuil <hverkuil@xs4all.nl> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
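The mechanical shape of the replacement (illustrative, not one specific hunk from the series):

    #include <linux/numa.h>         /* NUMA_NO_NODE == -1 */

    /* before: open-coded sentinel */
    if (nid == -1)
            nid = numa_mem_id();

    /* after: named constant */
    if (nid == NUMA_NO_NODE)
            nid = numa_mem_id();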
-
Andrey Ryabinin authored
The use-after-scope bug detector seems to be almost entirely useless for the Linux kernel. It has existed for over two years, but I've seen only one valid bug so far [1]. And the bug was fixed before it was reported. There were some other use-after-scope reports, but they were false positives due to different reasons, like incompatibility with the structleak plugin. This feature significantly increases stack usage, especially with GCC versions before 9, and causes a 32K stack overflow. It probably adds a performance penalty too. Given all that, let's remove the use-after-scope detector entirely. While preparing this patch I noticed that we mistakenly enable use-after-scope detection for the clang compiler regardless of the CONFIG_KASAN_EXTRA setting. This is also fixed now. [1] http://lkml.kernel.org/r/<20171129052106.rhgbjhhis53hkgfn@wfg-t540p.sh.intel.com> Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com Signed-off-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by: Will Deacon <will.deacon@arm.com> [arm64] Cc: Qian Cai <cai@lca.pw> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Mar 01, 2019
-
-
Arnd Bergmann authored
Building an arm64 allmodconfig kernel with clang results in over 140 warnings about overly large stack frames, the worst ones being: drivers/gpu/drm/panel/panel-sitronix-st7789v.c:196:12: error: stack frame size of 20224 bytes in function 'st7789v_prepare' drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c:196:12: error: stack frame size of 13120 bytes in function 'td028ttec1_panel_enable' drivers/usb/host/max3421-hcd.c:1395:1: error: stack frame size of 10048 bytes in function 'max3421_spi_thread' drivers/net/wan/slic_ds26522.c:209:12: error: stack frame size of 9664 bytes in function 'slic_ds26522_probe' drivers/crypto/ccp/ccp-ops.c:2434:5: error: stack frame size of 8832 bytes in function 'ccp_run_cmd' drivers/media/dvb-frontends/stv0367.c:1005:12: error: stack frame size of 7840 bytes in function 'stv0367ter_algo' None of these happen with gcc today, and almost all of these are the result of a single known issue in llvm. Hopefully it will eventually get fixed with the clang-9 release. In the meantime, the best idea I have is to turn off asan-stack for clang-8 and earlier, so we can produce a kernel that is safe to run. I have posted three patches that address the frame overflow warnings that are not addressed by turning off asan-stack, so in combination with this change, we get much closer to a clean allmodconfig build, which in turn is necessary to do meaningful build regression testing. It is still possible to turn on the CONFIG_ASAN_STACK option on all versions of clang, and it's always enabled for gcc, but when CONFIG_COMPILE_TEST is set, the option remains invisible, so allmodconfig and randconfig builds (which are normally done with a forced CONFIG_COMPILE_TEST) will still result in a mostly clean build. Link: http://lkml.kernel.org/r/20190222222950.3997333-1-arnd@arndb.de Link: https://bugs.llvm.org/show_bug.cgi?id=38809 Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Reviewed-by:
Qian Cai <cai@lca.pw> Reviewed-by:
Mark Brown <broonie@kernel.org> Acked-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Feb 28, 2019
-
-
Bart Van Assche authored
The patch that frees unused lock classes will modify the behavior of lockdep_free_key_range() and lockdep_reset_lock() depending on whether or not these functions are called from the context of the lockdep selftests. Hence make it easy to detect whether or not lockdep code is called from the context of a lockdep selftest. Signed-off-by:
Bart Van Assche <bvanassche@acm.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Johannes Berg <johannes@sipsolutions.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Cc: johannes.berg@intel.com Cc: tj@kernel.org Link: https://lkml.kernel.org/r/20190214230058.196511-10-bvanassche@acm.org Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- Feb 25, 2019
-
-
Anders Roxell authored
When running the BPF test suite the following splat occurs: [ 415.930950] test_bpf: #0 TAX jited:0 [ 415.931067] BUG: assuming atomic context at lib/test_bpf.c:6674 [ 415.946169] in_atomic(): 0, irqs_disabled(): 0, pid: 11556, name: modprobe [ 415.953176] INFO: lockdep is turned off. [ 415.957207] CPU: 1 PID: 11556 Comm: modprobe Tainted: G W 5.0.0-rc7-next-20190220 #1 [ 415.966328] Hardware name: HiKey Development Board (DT) [ 415.971592] Call trace: [ 415.974069] dump_backtrace+0x0/0x160 [ 415.977761] show_stack+0x24/0x30 [ 415.981104] dump_stack+0xc8/0x114 [ 415.984534] __cant_sleep+0xf0/0x108 [ 415.988145] test_bpf_init+0x5e0/0x1000 [test_bpf] [ 415.992971] do_one_initcall+0x90/0x428 [ 415.996837] do_init_module+0x60/0x1e4 [ 416.000614] load_module+0x1de0/0x1f50 [ 416.004391] __se_sys_finit_module+0xc8/0xe0 [ 416.008691] __arm64_sys_finit_module+0x24/0x30 [ 416.013255] el0_svc_common+0x78/0x130 [ 416.017031] el0_svc_handler+0x38/0x78 [ 416.020806] el0_svc+0x8/0xc Rework so that preemption is disabled while we loop over 'BPF_PROG_RUN(...)'. Fixes: 568f1967 ("bpf: check that BPF programs run with preemption disabled") Suggested-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Anders Roxell <anders.roxell@linaro.org> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net>
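A sketch of the rework (variable names assumed): the timed loop now runs with preemption disabled, matching the non-preemptible context BPF programs expect:

    u64 start, finish;
    int i, ret = 0;

    preempt_disable();
    start = ktime_get_ns();
    for (i = 0; i < runs; i++)
            ret = BPF_PROG_RUN(fp, data);
    finish = ktime_get_ns();
    preempt_enable();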
-
- Feb 22, 2019
-
-
Herbert Xu authored
The rhashtable_walk_init function has been obsolete for more than two years. This patch finally converts its last users over to rhashtable_walk_enter and removes it. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Johannes Berg <johannes.berg@intel.com>
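The conversion shape, sketched (visit() is a hypothetical per-object callback); rhashtable_walk_enter() takes no GFP argument and cannot fail, which is what made walk_init obsolete:

    struct rhashtable_iter iter;
    void *obj;

    rhashtable_walk_enter(&ht, &iter);  /* was: rhashtable_walk_init(&ht, &iter, GFP_KERNEL) */
    rhashtable_walk_start(&iter);
    while ((obj = rhashtable_walk_next(&iter)) != NULL) {
            if (IS_ERR(obj))
                    continue;       /* -EAGAIN: a resize occurred; entries may repeat */
            visit(obj);
    }
    rhashtable_walk_stop(&iter);
    rhashtable_walk_exit(&iter);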
-
- Feb 21, 2019
-
-
Colin Ian King authored
There are spelling mistakes in warning macro messages. Fix them. Signed-off-by:
Colin Ian King <colin.king@canonical.com> Acked-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- Feb 15, 2019
-
-
David Howells authored
Fix the creation of shortcuts for which the length of the index key value is an exact multiple of the machine word size. The problem is that the code that blanks off the unused bits of the shortcut value malfunctions if the number of bits in the last word equals machine word size. This is due to the "<<" operator being given a shift of zero in this case, and so the mask that should be all zeros is all ones instead. This causes the subsequent masking operation to clear everything rather than clearing nothing. Ordinarily, the presence of the hash at the beginning of the tree index key makes the issue very hard to test for, but in this case, it was encountered due to a development mistake that caused the hash output to be either 0 (keyring) or 1 (non-keyring) only. This made it susceptible to the keyctl/unlink/valid test in the keyutils package. The fix is simply to skip the blanking if the shift would be 0. For example, an index key that is 64 bits long would produce a 0 shift and thus a 'blank' of all 1s. This would then be inverted and AND'd onto the index_key, incorrectly clearing the entire last word. Fixes: 3cb98950 ("Add a generic associative array implementation.") Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
James Morris <james.morris@microsoft.com>
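The guard, in sketch form (identifiers simplified from lib/assoc_array.c):

    /*
     * Blank off the unused bits in the last word of the index key --
     * but only if there are any. When nbits is an exact multiple of
     * BITS_PER_LONG, "ULONG_MAX << 0" is all ones, so the &= ~blank
     * below would clear the whole word instead of clearing nothing.
     */
    if (nbits % BITS_PER_LONG) {
            unsigned long blank = ULONG_MAX << (nbits % BITS_PER_LONG);

            shortcut->index_key[keylen - 1] &= ~blank;
    }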
-
Miguel Ojeda authored
The upcoming GCC 9 release extends the -Wmissing-attributes warnings (enabled by -Wall) to C and aliases: it warns when particular function attributes are missing in the aliases but not in their target. In particular, it triggers here because crc32_le_base/__crc32c_le_base aren't __pure while their target crc32_le/__crc32c_le are. These aliases are used by architectures as a fallback in accelerated versions of CRC32. See commit 9784d82d ("lib/crc32: make core crc32() routines weak so they can be overridden"). Therefore, being fallbacks, it is likely that even if the aliases were called from C, there wouldn't be any optimizations possible. Currently, the only user is arm64, which calls this from asm. Still, marking the aliases as __pure makes sense and is a good idea for documentation purposes and possible future optimizations, which also silences the warning. Acked-by:
Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by:
Laura Abbott <labbott@redhat.com> Signed-off-by:
Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
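The resulting declarations, approximately (a sketch in the style of lib/crc32.c; __alias() is the kernel's wrapper around __attribute__((alias))):

    /* weak so an arch-accelerated implementation can override it */
    u32 __pure __weak crc32_le(u32 crc, unsigned char const *p, size_t len);

    /* the fallback alias now carries the same __pure attribute as its target */
    u32 __pure crc32_le_base(u32 crc, unsigned char const *p, size_t len)
            __alias(crc32_le);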
-
- Feb 14, 2019
-
-
Jiri Pirko authored
It is possible that an object which was originally a parent, with 0 direct users, is no longer considered a parent in the hints. Then the weight of this object is 0 and the current code ignores it. That's why the total number of hint objects might be lower than for the original objagg and the WARN_ON is hit. Fix this by considering a weight of 0 valid. Fixes: 9069a381 ("lib: objagg: implement optimization hints assembly and use hints for object creation") Signed-off-by:
Jiri Pirko <jiri@mellanox.com> Reviewed-by:
Ido Schimmel <idosch@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We need to set the error message on this path otherwise some of the callers, such as test_hints_case(), print from an uninitialized pointer. We had a similar bug earlier and set "errmsg" to NULL in the caller, test_delta_action_item(). That code is no longer required so I have removed it. Fixes: 9069a381 ("lib: objagg: implement optimization hints assembly and use hints for object creation") Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Acked-by:
Jiri Pirko <jiri@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
There is a typo here. We intended to check "objagg2" but we instead test "objagg" which is not an error pointer. Fixes: 9069a381 ("lib: objagg: implement optimization hints assembly and use hints for object creation") Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Acked-by:
Jiri Pirko <jiri@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We need to set the error code on this path otherwise we return ERR_PTR(0) which would result in a NULL dereference in the caller. Fixes: 9069a381 ("lib: objagg: implement optimization hints assembly and use hints for object creation") Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Acked-by:
Jiri Pirko <jiri@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
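The pitfall in sketch form (generic names): with err left at zero, ERR_PTR(err) is ERR_PTR(0), i.e. NULL, and a caller that only checks IS_ERR() will happily dereference it.

    static struct thing *thing_create_sketch(void)
    {
            struct thing *t;
            int err;

            t = kzalloc(sizeof(*t), GFP_KERNEL);
            if (!t) {
                    err = -ENOMEM;  /* the missing assignment */
                    goto err_alloc;
            }
            return t;

    err_alloc:
            return ERR_PTR(err);    /* ERR_PTR(0) == NULL without it */
    }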
-
- Feb 13, 2019
-
-
Andrea Righi authored
Since the kprobe breakpoint handler uses bsearch(), probing on this routine can cause a recursive breakpoint problem. int3 ->do_int3() ->ftrace_int3_handler() ->ftrace_location() ->ftrace_location_range() ->bsearch() -> int3 Prohibit probing on bsearch(). Signed-off-by:
Andrea Righi <righi.andrea@gmail.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/154998813406.31052.8791425358974650922.stgit@devbox Signed-off-by:
Ingo Molnar <mingo@kernel.org>
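The shape of the fix, sketched (the commit only adds the annotation; the body below is a standard binary search, not quoted from lib/bsearch.c):

    #include <linux/kprobes.h>

    void *bsearch(const void *key, const void *base, size_t num, size_t size,
                  int (*cmp)(const void *key, const void *elt))
    {
            const char *pivot;
            int result;

            while (num > 0) {
                    pivot = (const char *)base + (num >> 1) * size;
                    result = cmp(key, pivot);
                    if (result == 0)
                            return (void *)pivot;
                    if (result > 0) {       /* key is in the upper half */
                            base = pivot + size;
                            num--;
                    }
                    num >>= 1;
            }
            return NULL;
    }
    NOKPROBE_SYMBOL(bsearch);       /* refuse to plant a breakpoint here */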
-
Masami Hiramatsu authored
Since kprobes depends on preempt disable/enable, probing on the preempt debug routines can cause recursive breakpoint bugs. Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andrea Righi <righi.andrea@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/154998804911.31052.3541963527929117920.stgit@devbox Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- Feb 08, 2019
-
-
Jiri Pirko authored
Count the number of roots and add it to the stats. It is handy for the library user to have these stats available, as it can act upon them without counting roots itself. Signed-off-by:
Jiri Pirko <jiri@mellanox.com> Signed-off-by:
Ido Schimmel <idosch@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Implement a simple greedy algorithm to find a more optimized root-delta tree for a given objagg instance. These "hints" can be used by a driver to: 1) check if the hints are better (driver's choice) than the original objagg tree; the driver compares the objagg stats with the hints stats. 2) use the hints to create a new objagg instance which will construct the root-delta tree according to the passed hints. Currently, only a simple greedy algorithm is implemented. Basically, it picks the roots according to the maximal possible user count including deltas. Signed-off-by:
Jiri Pirko <jiri@mellanox.com> Signed-off-by:
Ido Schimmel <idosch@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by:
Jiri Pirko <jiri@mellanox.com> Signed-off-by:
Ido Schimmel <idosch@mellanox.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- Feb 04, 2019
-
-
Elena Reshetova authored
This adds an smp_acquire__after_ctrl_dep() barrier on successful decrease of refcounter value from 1 to 0 for refcount_dec(sub)_and_test variants and therefore gives stronger memory ordering guarantees than prior versions of these functions. Co-developed-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Elena Reshetova <elena.reshetova@intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Andrea Parri <andrea.parri@amarulasolutions.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: dvyukov@google.com Cc: keescook@chromium.org Cc: stern@rowland.harvard.edu Link: https://lkml.kernel.org/r/1548847131-27854-2-git-send-email-elena.reshetova@intel.com Signed-off-by:
Ingo Molnar <mingo@kernel.org>
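A minimal sketch of the strengthened primitive (simplified: the real refcount_dec_and_test() also handles saturation and underflow checking):

    static inline bool refcount_dec_and_test_sketch(refcount_t *r)
    {
            if (atomic_dec_and_test(&r->refs)) {
                    /*
                     * Upgrade the control dependency on the 1 -> 0 result
                     * to ACQUIRE ordering, so the destruction that follows
                     * a successful dec-and-test cannot be reordered before
                     * another CPU's final accesses to the object.
                     */
                    smp_acquire__after_ctrl_dep();
                    return true;
            }
            return false;
    }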
-
- Feb 01, 2019
-
-
Dan Carpenter authored
There is a copy and paste bug so we set "config->test_driver" to NULL twice instead of setting "config->test_fs". Smatch complains that it leads to a double free: lib/test_kmod.c:840 __kmod_config_init() warn: 'config->test_fs' double freed Link: http://lkml.kernel.org/r/20190121140011.GA14283@kadam Fixes: d9c6a72d ("kmod: add test driver to stress test the module loader") Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Acked-by:
Luis Chamberlain <mcgrof@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
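The copy-and-paste in sketch form (field names from the commit message; the buggy line reset test_driver a second time, leaving test_fs dangling for a later double free):

    kfree_const(config->test_driver);
    config->test_driver = NULL;

    kfree_const(config->test_fs);
    config->test_fs = NULL;         /* was: config->test_driver = NULL; */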
-
- Jan 31, 2019
-
-
Sergei Shtylyov authored
devm_ioremap_resource() prefers calling devm_request_mem_region() with a resource name instead of a device name -- this looks pretty iff a resource name isn't specified via a device tree with a "reg-names" property (in this case, a resource name is set to a device node's full name), but if it is, it doesn't really scale since these names are only unique to a given device node, not globally; so, looking at the output of 'cat /proc/iomem', you do not have an idea which memory region belongs to which device (see "dirmap", "regs", and "wbuf" lines below): 08000000-0bffffff : dirmap 48000000-bfffffff : System RAM 48000000-48007fff : reserved 48080000-48b0ffff : Kernel code 48b10000-48b8ffff : reserved 48b90000-48c7afff : Kernel data bc6a4000-bcbfffff : reserved bcc0f000-bebfffff : reserved bec0e000-bec0efff : reserved bec11000-bec11fff : reserved bec12000-bec14fff : reserved bec15000-bfffffff : reserved e6050000-e605004f : gpio@e6050000 e6051000-e605104f : gpio@e6051000 e6052000-e605204f : gpio@e6052000 e6053000-e605304f : gpio@e6053000 e6054000-e605404f : gpio@e6054000 e6055000-e605504f : gpio@e6055000 e6060000-e606050b : pin-controller@e6060000 e6e60000-e6e6003f : e6e60000.serial e7400000-e7400fff : ethernet@e7400000 ee200000-ee2001ff : regs ee208000-ee2080ff : wbuf I think that devm_request_mem_region() should be called with dev_name() despite the region names won't look as pretty as before (however, we gain more consistency with e.g. the serial driver: 08000000-0bffffff : ee200000.rpc 48000000-bfffffff : System RAM 48000000-48007fff : reserved 48080000-48b0ffff : Kernel code 48b10000-48b8ffff : reserved 48b90000-48c7afff : Kernel data bc6a4000-bcbfffff : reserved bcc0f000-bebfffff : reserved bec0e000-bec0efff : reserved bec11000-bec11fff : reserved bec12000-bec14fff : reserved bec15000-bfffffff : reserved e6050000-e605004f : e6050000.gpio e6051000-e605104f : e6051000.gpio e6052000-e605204f : e6052000.gpio e6053000-e605304f : e6053000.gpio e6054000-e605404f : e6054000.gpio e6055000-e605504f : e6055000.gpio e6060000-e606050b : e6060000.pin-controller e6e60000-e6e6003f : e6e60000.serial e7400000-e7400fff : e7400000.ethernet ee200000-ee2001ff : ee200000.rpc ee208000-ee2080ff : ee200000.rpc Fixes: 72f8c0bf ("lib: devres: add convenience function to remap a resource") Signed-off-by:
Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bart Van Assche authored
The test_insert_dup() function from lib/test_rhashtable.c passes a pointer to a stack object to rhltable_init(). Allocate the hash table dynamically to avoid that the following is reported with object debugging enabled: ODEBUG: object (ptrval) is on stack (ptrval), but NOT annotated. WARNING: CPU: 0 PID: 1 at lib/debugobjects.c:368 __debug_object_init+0x312/0x480 Modules linked in: EIP: __debug_object_init+0x312/0x480 Call Trace: ? debug_object_init+0x1a/0x20 ? __init_work+0x16/0x30 ? rhashtable_init+0x1e1/0x460 ? sched_clock_cpu+0x57/0xe0 ? rhltable_init+0xb/0x20 ? test_insert_dup+0x32/0x20f ? trace_hardirqs_on+0x38/0xf0 ? ida_dump+0x10/0x10 ? jhash+0x130/0x130 ? my_hashfn+0x30/0x30 ? test_rht_init+0x6aa/0xab4 ? ida_dump+0x10/0x10 ? test_rhltable+0xc5c/0xc5c ? do_one_initcall+0x67/0x28e ? trace_hardirqs_off+0x22/0xe0 ? restore_all_kernel+0xf/0x70 ? trace_hardirqs_on_thunk+0xc/0x10 ? restore_all_kernel+0xf/0x70 ? kernel_init_freeable+0x142/0x213 ? rest_init+0x230/0x230 ? kernel_init+0x10/0x110 ? schedule_tail_wrapper+0x9/0xc ? ret_from_fork+0x19/0x24 Cc: Thomas Graf <tgraf@suug.ch> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: netdev@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by:
Bart Van Assche <bvanassche@acm.org> Acked-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
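A sketch of the fix (error handling abbreviated; test_rht_params is a hypothetical parameter struct): heap-allocate the rhltable so the work_struct embedded in it is no longer a stack object.

    struct rhltable *rhlt;
    int err;

    rhlt = kmalloc(sizeof(*rhlt), GFP_KERNEL);
    if (!rhlt)
            return -ENOMEM;

    err = rhltable_init(rhlt, &test_rht_params);    /* no longer on the stack */
    if (err) {
            kfree(rhlt);
            return err;
    }
    /* ... run the duplicate-insert tests against rhlt ... */
    rhltable_destroy(rhlt);
    kfree(rhlt);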
-
- Jan 22, 2019
-
-
Bo YU authored
There is currently a missing terminating newline in the non-switch-case match when msg == NULL. Signed-off-by:
Bo YU <tsu.yubo@gmail.com> Reviewed-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Bo YU authored
Replace printk with pr_warn in kobject_synth_uevent and printk with pr_err in uevent_net_init, making both consistent with other code in kobject_uevent.c. Signed-off-by:
Bo YU <tsu.yubo@gmail.com> Reviewed-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-