- Mar 18, 2019
-
-
Jens Axboe authored
If bio_iov_iter_get_pages() is called on an iov_iter that is flagged with NO_REF, then we don't need to add a page reference for the pages that we add. Add BIO_NO_PAGE_REF to track this in the bio, so IO completion knows not to drop a reference to these pages. Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
Jens Axboe authored
For ITER_BVEC, if we're holding on to kernel pages, the caller doesn't need to grab a reference to the bvec pages, and drop that same reference on IO completion. This is essentially safe for any ITER_BVEC, but some use cases end up reusing pages and unconditionally dropping a page reference on completion. An example of that is sendfile(2), which ends up being a splice_in + splice_out on the pipe pages. Add a flag that tells us it's fine to not grab a page reference to the bvec pages, since that caller knows not to drop a reference when it's done with the pages. Signed-off-by:
Jens Axboe <axboe@kernel.dk>
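As a self-contained sketch of the idea behind both bio changes above (stand-in types such as fake_bio and fake_page, not the real kernel structures): if the submitter already owns references to the bvec pages, the bio remembers that and completion skips the put_page() calls.

    #include <stdbool.h>
    #include <stdio.h>

    struct fake_page { int refcount; };

    struct fake_bio {
            bool no_page_ref;               /* iter owner keeps the refs */
            struct fake_page *pages[4];
            int nr_pages;
    };

    static void put_page(struct fake_page *p)
    {
            p->refcount--;
    }

    static void fake_bio_complete(struct fake_bio *bio)
    {
            int i;

            if (bio->no_page_ref)
                    return;                 /* we never took references */
            for (i = 0; i < bio->nr_pages; i++)
                    put_page(bio->pages[i]);
    }

    int main(void)
    {
            struct fake_page pg = { .refcount = 1 };
            struct fake_bio bio = {
                    .no_page_ref = true,
                    .pages = { &pg },
                    .nr_pages = 1,
            };

            fake_bio_complete(&bio);
            printf("refcount after completion: %d\n", pg.refcount); /* still 1 */
            return 0;
    }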
-
- Mar 08, 2019
-
-
Luc Van Oostenryck authored
The percpu member of this structure is declared as: struct ... ** __percpu member; So its type is: __percpu pointer to pointer to struct ... But looking at how it's used, its type should be: pointer to __percpu pointer to struct ... and it should thus be declared as: struct ... * __percpu *member; So fix the placement of '__percpu' in the definition of this structure. This silences a few Sparse warnings like: warning: incorrect type in initializer (different address spaces) expected void const [noderef] <asn:3> *__vpp_verify got struct sched_domain ** Link: http://lkml.kernel.org/r/20190118144902.79065-1-luc.vanoostenryck@gmail.com Fixes: 017c59c0 ("relay: Use per CPU constructs for the relay channel buffer pointers") Signed-off-by:
Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Cc: Jens Axboe <axboe@suse.de> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
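A compile-only sketch of the two placements the commit contrasts; __percpu is defined away so a plain compiler accepts it (in the kernel it is a Sparse address-space attribute), and struct foo merely stands in for the affected structure:

    #define __percpu        /* in-kernel: a Sparse noderef/address-space attribute */

    struct foo { int x; };

    struct wrong {
            struct foo ** __percpu member;  /* __percpu pointer to pointer */
    };

    struct fixed {
            struct foo * __percpu *member;  /* pointer to __percpu pointer */
    };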
-
Souptick Joarder authored
Page fault handlers are supposed to return VM_FAULT codes, but some drivers/file systems mistakenly return error numbers. Now that all drivers/file systems have been converted to use the vm_fault_t return type, change the type definition to no longer be compatible with 'int'. By making it an unsigned int, the function prototype becomes incompatible with a function which returns int. Sparse will detect any attempts to return a value which is not a VM_FAULT code. VM_FAULT_SET_HINDEX and VM_FAULT_GET_HINDEX values are changed to avoid conflict with other VM_FAULT codes. [jrdr.linux@gmail.com: fix warnings] Link: http://lkml.kernel.org/r/20190109183742.GA24326@jordon-HP-15-Notebook-PC Link: http://lkml.kernel.org/r/20190108183041.GA12137@jordon-HP-15-Notebook-PC Signed-off-by:
Souptick Joarder <jrdr.linux@gmail.com> Reviewed-by:
William Kucharski <william.kucharski@oracle.com> Reviewed-by:
Mike Rapoport <rppt@linux.ibm.com> Reviewed-by:
Matthew Wilcox <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Rik van Riel <riel@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
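A rough sketch of what the stricter typedef looks like; __bitwise and __force are Sparse-only annotations (stubbed out here) and the constant's value is illustrative rather than the kernel's:

    #define __bitwise
    #define __force

    typedef unsigned int __bitwise vm_fault_t;

    #define VM_FAULT_SIGBUS ((__force vm_fault_t)0x0002)    /* illustrative value */

    vm_fault_t good_fault(void)
    {
            return VM_FAULT_SIGBUS;         /* a proper VM_FAULT code */
    }

    vm_fault_t bad_fault(void)
    {
            /* Builds with gcc, but Sparse now warns: a plain (negative)
             * int is not a restricted vm_fault_t. */
            return -12;
    }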
-
Dave Rodgman authored
To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com Signed-off-by:
Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Matt Sealey <matt.sealey@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Dave Rodgman authored
Patch series "lib/lzo: run-length encoding support", v5. Following on from the previous lzo-rle patchset: https://lkml.org/lkml/2018/11/30/972 This patchset contains only the RLE patches, and should be applied on top of the non-RLE patches ( https://lkml.org/lkml/2019/2/5/366 ). Previously, some questions were raised around the RLE patches. I've done some additional benchmarking to answer these questions. In short: - RLE offers significant additional performance (data-dependent) - I didn't measure any regressions that were clearly outside the noise One concern with this patchset was around performance - specifically, measuring RLE impact separately from Matt Sealey's patches (CTZ & fast copy). I have done some additional benchmarking which I hope clarifies the benefits of each part of the patchset. Firstly, I've captured some memory via /dev/fmem from a Chromebook with many tabs open which is starting to swap, and then split this into 4178 4k pages. I've excluded the all-zero pages (as zram does), and also the no-zero pages (which won't tell us anything about RLE performance). This should give a realistic test dataset for zram. What I found was that the data is VERY bimodal: 44% of pages in this dataset contain 5% or fewer zeros, and 44% contain over 90% zeros (30% if you include the no-zero pages). This supports the idea of special-casing zeros in zram. Next, I've benchmarked four variants of lzo on these pages (on 64-bit Arm at max frequency): baseline LZO; baseline + Matt Sealey's patches (aka MS); baseline + RLE only; baseline + MS + RLE. Numbers are for weighted roundtrip throughput (the weighting reflects that zram does more compression than decompression). https://drive.google.com/file/d/1VLtLjRVxgUNuWFOxaGPwJYhl_hMQXpHe/view?usp=sharing Matt's patches help in all cases for Arm (and no effect on Intel), as expected. RLE also behaves as expected: with few zeros present, it makes no difference; above ~75%, it gives a good improvement (50 - 300 MB/s on top of the benefit from Matt's patches). Best performance is seen with both MS and RLE patches. Finally, I have benchmarked the same dataset on an x86-64 device. Here, the MS patches make no difference (as expected); RLE helps, similarly as on Arm. There were no definite regressions; allowing for observational error, 0.1% (3/4178) of cases had a regression > 1 standard deviation, of which the largest was 4.6% (1.2 standard deviations). I think this is probably within the noise. https://drive.google.com/file/d/1xCUVwmiGD0heEMx5gcVEmLBI4eLaageV/view?usp=sharing One point to note is that the graphs show RLE appears to help very slightly with no zeros present! This is because the extra code causes the clang optimiser to change code layout in a way that happens to have a significant benefit. Taking baseline LZO and adding a do-nothing line like "__builtin_prefetch(out_len);" immediately before the "goto next" has the same effect. So this is a real, but basically spurious effect - it's small enough not to upset the overall findings. This patch (of 3): When using zram, we frequently encounter long runs of zero bytes. This adds a special case which identifies runs of zeros and encodes them using run-length encoding. This is faster for both compression and decompression. For high-entropy data which doesn't hit this case, impact is minimal. Compression ratio is within a few percent in all cases.
This modifies the bitstream in a way which is backwards compatible (i.e., we can decompress old bitstreams, but old versions of lzo cannot decompress new bitstreams). Link: http://lkml.kernel.org/r/20190205155944.16007-2-dave.rodgman@arm.com Signed-off-by:
Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Matt Sealey <matt.sealey@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
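A toy, self-contained zero-run encoder that only illustrates the idea described above; the actual lzo-rle bitstream format differs:

    #include <stddef.h>
    #include <stdio.h>

    /* Runs of zero bytes become a two-byte (marker, length) pair,
     * everything else is copied verbatim. */
    static size_t zero_rle_encode(const unsigned char *in, size_t len,
                                  unsigned char *out)
    {
            size_t i = 0, o = 0;

            while (i < len) {
                    if (in[i] == 0) {
                            size_t run = 0;

                            while (i + run < len && in[i + run] == 0 && run < 255)
                                    run++;
                            out[o++] = 0x00;                /* zero-run marker */
                            out[o++] = (unsigned char)run;  /* run length */
                            i += run;
                    } else {
                            out[o++] = in[i++];             /* literal byte */
                    }
            }
            return o;
    }

    int main(void)
    {
            unsigned char page[64] = { 'a', 'b' };          /* rest is zeros */
            unsigned char out[128];
            size_t n = zero_rle_encode(page, sizeof(page), out);

            printf("64 bytes in -> %zu bytes out\n", n);    /* the zero run collapses */
            return 0;
    }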
-
Oleg Nesterov authored
Large enterprise clients often run applications out of networked file systems where the IT mandated layout of project volumes can end up leading to paths that are longer than 128 characters. Bumping this up to the next order of two solves this problem in all but the most egregious case while still fitting into a 512b slab. [oleg@redhat.com: update comment, per Kees] Link: http://lkml.kernel.org/r/20181112160956.GA28472@redhat.com Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Reported-by:
Ben Woodard <woodard@redhat.com> Reviewed-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Michal Hocko <mhocko@suse.com> Acked-by:
Kees Cook <keescook@chromium.org> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Ian Kent authored
Add an autofs file system mount option that can be used to provide a generic indicator to applications that the mount entry should be ignored when displaying mount information. In other OSes that provide autofs and that provide a mount list to user space based on the kernel mount list a no-op mount option ("ignore" is the one used on the most common OS) is allowed so that autofs file system users can optionally use it. The idea is that it be used by user space programs to exclude autofs mounts from consideration when reading the mounts list. Prior to the change to link /etc/mtab to /proc/self/mounts all I needed to do to achieve this was to use mount(2) and not update the mtab, but now that no longer works. I know the symlinking happened a long time ago and I considered doing this then, but at the time I couldn't remember the commonly used option name and thought persuading the various utility maintainers would be too hard. But now I have a RHEL request to do this for compatibility for a widely used product so I want to go ahead with it and try and enlist the help of some utility package maintainers. Clearly, without the option nothing can be done so it's at least a start. Link: http://lkml.kernel.org/r/154725123970.11260.6113771566924907275.stgit@pluto-themaw-net Signed-off-by:
Ian Kent <raven@themaw.net> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Valdis Kletnieks authored
Sparse issues a warning: CHECK init/calibrate.c init/calibrate.c:271:28: warning: symbol 'calibration_delay_done' was not declared. Should it be static? The actual issue is that it's a __weak symbol that archs can override (in fact, ARM does so), but no prototype is provided. Let's provide one to prevent surprises. Link: http://lkml.kernel.org/r/18827.1548750938@turing-police.cc.vt.edu Signed-off-by:
Valdis Kletnieks <valdis.kletnieks@vt.edu> Reviewed-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
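A compile-only sketch of the pattern being tidied up: a __weak default definition that an architecture may override, together with the prototype whose absence triggered the Sparse warning:

    #define __weak __attribute__((weak))

    void calibration_delay_done(void);      /* the missing prototype */

    /* Default (weak) definition; an architecture such as ARM can supply
     * a strong definition that overrides this one. */
    void __weak calibration_delay_done(void)
    {
    }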
-
Vineet Gupta authored
| > Also, set_mask_bits is used in fs quite a bit and we can possibly come up | > with a generic llsc based implementation (w/o the cmpxchg loop) | | May I also suggest changing the return value of set_mask_bits() to old. | | You can compute the new value given old, but you cannot compute the old | value given new, therefore old is the better return value. Also, no | current user seems to use the return value, so changing it is without | risk. Link: http://lkml.kernel.org/g/20150807110955.GH16853@twins.programming.kicks-ass.net Link: http://lkml.kernel.org/r/1548275584-18096-4-git-send-email-vgupta@synopsys.com Signed-off-by:
Vineet Gupta <vgupta@synopsys.com> Suggested-by:
Peter Zijlstra <peterz@infradead.org> Reviewed-by:
Anthony Yznaga <anthony.yznaga@oracle.com> Acked-by:
Will Deacon <will.deacon@arm.com> Cc: Miklos Szeredi <mszeredi@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
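A userspace sketch of the suggested semantics, with GCC atomic builtins standing in for the kernel's cmpxchg() loop: clear the mask, set the bits, hand back the old value:

    #include <stdio.h>

    /* Returns the OLD value: new can always be recomputed from old,
     * but not the other way around. */
    static unsigned long set_mask_bits_sketch(unsigned long *ptr,
                                              unsigned long mask,
                                              unsigned long bits)
    {
            unsigned long old, new;

            do {
                    old = __atomic_load_n(ptr, __ATOMIC_RELAXED);
                    new = (old & ~mask) | bits;
            } while (!__atomic_compare_exchange_n(ptr, &old, new, 0,
                                                  __ATOMIC_SEQ_CST,
                                                  __ATOMIC_RELAXED));

            return old;
    }

    int main(void)
    {
            unsigned long flags = 0xf0;
            unsigned long old = set_mask_bits_sketch(&flags, 0x30, 0x03);

            printf("old=%#lx new=%#lx\n", old, flags);      /* old=0xf0 new=0xc3 */
            return 0;
    }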
-
Rasmus Villemoes authored
With coming changes on x86-64, all dynamic debug descriptors in a translation unit must have distinct names. The macro _dynamic_func_call takes care of that. No functional change. Link: http://lkml.kernel.org/r/20190212214150.4807-15-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
If CONFIG_DYNAMIC_DEBUG is not set, acpi_handle_debug directly invokes acpi_handle_printk (if DEBUG) or does a no-printk (if !DEBUG). So this macro is never used. Link: http://lkml.kernel.org/r/20190212214150.4807-14-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Acked-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
dynamic debug may be implemented via static keys, but ACPI is missing out on that runtime benefit since it open-codes one possible definition of DYNAMIC_DEBUG_BRANCH. Link: http://lkml.kernel.org/r/20190212214150.4807-13-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Acked-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
For the upcoming 'define the _ddebug descriptor in assembly', we need all the descriptors in a translation unit to have distinct names (because asm does not understand C scope). The easiest way to achieve that is as usual with an extra level of macros, passing the identifier to use to the innermost macro, generating it via __UNIQUE_ID or something. However, instead of repeating that exercise for dynamic_pr_debug, dynamic_dev_dbg, dynamic_netdev_dbg and dynamic_hex_dump separately, we can use the similarity between their bodies to implement them via a common macro, _dynamic_func_call - though the hex_dump case requires a slight variant, since print_hex_dump does not take the _ddebug descriptor. We'll also get to use that variant elsewhere (btrfs). Link: http://lkml.kernel.org/r/20190212214150.4807-11-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
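A self-contained sketch of the "extra level of macros" trick described above; every expansion gets a uniquely named static descriptor (via __COUNTER__ here, where the kernel uses __UNIQUE_ID), and all names are made up for illustration:

    #include <stdarg.h>
    #include <stdio.h>

    struct _ddebug_sketch {
            const char *file;
            const char *fmt;
    };

    static void dyn_emit(struct _ddebug_sketch *dp, const char *fmt, ...)
    {
            va_list args;

            va_start(args, fmt);
            printf("%s: ", dp->file);
            vprintf(fmt, args);
            va_end(args);
    }

    /* Innermost level: defines the per-call-site descriptor under the
     * identifier it was handed, then invokes the emit function. */
    #define __dyn_func_call(id, fmt, func, ...)                             \
            do {                                                            \
                    static struct _ddebug_sketch id = { __FILE__, fmt };    \
                    func(&id, ##__VA_ARGS__);                               \
            } while (0)

    /* Indirection so __COUNTER__ expands before being pasted. */
    #define ___cat(a, b)    a##b
    #define __cat(a, b)     ___cat(a, b)

    #define _dyn_func_call(fmt, func, ...)                                  \
            __dyn_func_call(__cat(__ddebug_, __COUNTER__), fmt, func, ##__VA_ARGS__)

    #define dyn_pr_debug(fmt, ...)                                          \
            _dyn_func_call(fmt, dyn_emit, fmt, ##__VA_ARGS__)

    int main(void)
    {
            dyn_pr_debug("hello %d\n", 1);  /* gets descriptor __ddebug_0 */
            dyn_pr_debug("hello %d\n", 2);  /* gets descriptor __ddebug_1 */
            return 0;
    }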
-
Rasmus Villemoes authored
For symmetry with ddebug_remove_module, and to avoid a bit of ifdeffery in module.c, move the declaration of ddebug_add_module inside #if defined(CONFIG_DYNAMIC_DEBUG) and add a corresponding no-op stub in the #else branch. Link: http://lkml.kernel.org/r/20190212214150.4807-10-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
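A sketch of the resulting header arrangement, with the declaration under CONFIG_DYNAMIC_DEBUG and a no-op static inline stub otherwise; the prototype shown approximates the kernel's of that era:

    struct _ddebug;         /* opaque here */

    #if defined(CONFIG_DYNAMIC_DEBUG)
    int ddebug_add_module(struct _ddebug *tab, unsigned int n,
                          const char *modname);
    #else
    static inline int ddebug_add_module(struct _ddebug *tab, unsigned int n,
                                        const char *modname)
    {
            return 0;       /* nothing to register; keeps module.c ifdef-free */
    }
    #endif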
-
Rasmus Villemoes authored
Instead of defining DEFINE_DYNAMIC_DEBUG_METADATA in terms of a helper DEFINE_DYNAMIC_DEBUG_METADATA_KEY, that needs another helper dd_key_init to be properly defined, just make the various #ifdef branches define a _DPRINTK_KEY_INIT that can be used directly, similar to _DPRINTK_FLAGS_DEFAULT. Link: http://lkml.kernel.org/r/20190212214150.4807-5-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
pr_debug_ratelimited tests the dynamic debug descriptor the old-fashioned way, and doesn't utilize the static key/jump label implementation when CONFIG_JUMP_LABEL is set. Use the DYNAMIC_DEBUG_BRANCH which is defined appropriately. Link: http://lkml.kernel.org/r/20190212214150.4807-4-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Petr Mladek <pmladek@suse.com> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
net_dbg_ratelimited tests the dynamic debug descriptor the old-fashioned way, and doesn't utilize the static key/jump label implementation when CONFIG_JUMP_LABEL is set. Use the DYNAMIC_DEBUG_BRANCH which is defined appropriately. Link: http://lkml.kernel.org/r/20190212214150.4807-3-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
Patch series "various dynamic_debug patches", v4. This started as an experiment to see how hard it would be to change the four pointers in struct _ddebug into relative offsets, a la CONFIG_GENERIC_BUG_RELATIVE_POINTERS, thus saving 16 bytes per pr_debug site (and thus exactly making up for the extra space used by the introduction of jump labels in 9049fc74). I stumbled on a few things that are probably worth fixing regardless of whether that goal is deemed worthwhile. Back at v3 (in November), I redid the implementation on top of the fancy new asm-macros stuff. Luckily enough, v3 didn't get picked up, since the asm-macros were backed out again. I still want to do the relative-pointers thing eventually, but we're close to the merge window opening, so here's just most of the "incidental" patches, some of which also serve as preparation for the relative pointers. This patch (of 4): dev_dbg_ratelimited tests the dynamic debug descriptor the old-fashioned way, and doesn't utilize the static key/jump label implementation when CONFIG_JUMP_LABEL is set. Use the DYNAMIC_DEBUG_BRANCH which is defined appropriately. Link: http://lkml.kernel.org/r/20190212214150.4807-2-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by:
Jason Baron <jbaron@akamai.com> Cc: David Sterba <dsterba@suse.com> Cc: Petr Mladek <pmladek@suse.com> Cc: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Nadav Amit authored
Commit 95846ecf ("pid: replace pid bitmap implementation with IDR API") removed next_pidmap() but left its declaration. Remove it. No functional change. Link: http://lkml.kernel.org/r/20190213113736.21922-1-namit@vmware.com Signed-off-by:
Nadav Amit <namit@vmware.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Gargi Sharma <gs051095@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Masahiro Yamada authored
<linux/kernel.h> tends to be cluttered because we often put various sorts of unrelated stuff in it. So, we have split out a sensible chunk of code into a separate header from time to time. This commit splits out the *_MAX and *_MIN defines. The standard header <limits.h> contains various MAX, MIN constants including numerical limits. [1] I think it makes sense to move in-kernel MAX, MIN constants into include/linux/limits.h. We already have include/uapi/linux/limits.h to contain some user-space constants. I changed its include guard to _UAPI_LINUX_LIMITS_H. This change has no impact on user space because scripts/headers_install.sh rips off the '_UAPI' prefix from the include guards of exported headers. [1] http://pubs.opengroup.org/onlinepubs/009604499/basedefs/limits.h.html Link: http://lkml.kernel.org/r/1549156242-20806-2-git-send-email-yamada.masahiro@socionext.com Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Alex Elder <elder@linaro.org> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Zhang Yanmin <yanmin.zhang@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Masahiro Yamada authored
The commit log of 44f564a4 ("ipc: add definitions of USHORT_MAX and others") did not explain why it used (s16) and (u16) instead of (short) and (unsigned short). Let's use (short) and (unsigned short), which is more sensible, and more consistent with the other MAX/MIN defines. As you see in include/uapi/asm-generic/int-ll64.h, s16/u16 are typedef'ed as signed/unsigned short. So, this commit does not have a functional change. Remove the unneeded parentheses around ~0U while we are here. Link: http://lkml.kernel.org/r/1549156242-20806-1-git-send-email-yamada.masahiro@socionext.com Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Zhang Yanmin <yanmin.zhang@intel.com> Cc: Alex Elder <elder@linaro.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
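A quick check that the reworded define names the same value, which is why the change is purely cosmetic (u16 mirrors the short typedef in include/uapi/asm-generic/int-ll64.h):

    #include <assert.h>

    typedef unsigned short u16;

    #define USHRT_MAX_OLD   ((u16)(~0U))
    #define USHRT_MAX_NEW   ((unsigned short)~0U)

    int main(void)
    {
            assert(USHRT_MAX_OLD == USHRT_MAX_NEW);         /* both 65535 */
            return 0;
    }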
-
Rasmus Villemoes authored
Instead of doing this compile-time check in some slightly arbitrary user of struct filename, put it next to the definition. Link: http://lkml.kernel.org/r/20190208203015.29702-3-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Kees Cook <keescook@chromium.org> Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
BUILD_BUG_ON() is a little annoying, since it cannot be used outside function scope. So one cannot put assertions about the sizeof() of a struct next to the struct definition, but has to hide that in some more or less arbitrary function. Since gcc 4.6 (which is now also the required minimum), there is support for the C11 _Static_assert in all C modes, including gnu89. So add a simple wrapper for that. _Static_assert() requires a message argument, which is usually quite redundant (and I believe that bug got fixed at least in newer C++ standards), but we can easily work around that with a little macro magic, making it optional. For example, adding static_assert(sizeof(struct printf_spec) == 8); in vsprintf.c and modifying that struct to violate it, one gets ./include/linux/build_bug.h:78:41: error: static assertion failed: "sizeof(struct printf_spec) == 8" #define __static_assert(expr, msg, ...) _Static_assert(expr, "" msg "") godbolt.org suggests that _Static_assert() has been supported by clang since at least 3.0.0. Link: http://lkml.kernel.org/r/20190208203015.29702-1-linux@rasmusvillemoes.dk Signed-off-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Acked-by:
Alexey Dobriyan <adobriyan@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Kees Cook <keescook@chromium.org> Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
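A self-contained demo of the wrapper, spelling __static_assert() exactly as quoted in the commit message; the struct and values are only examples:

    /* The message argument defaults to the stringified expression. */
    #define static_assert(expr, ...)        __static_assert(expr, ##__VA_ARGS__, #expr)
    #define __static_assert(expr, msg, ...) _Static_assert(expr, "" msg "")

    struct printf_spec_sketch {     /* example struct, 8 bytes */
            unsigned int type;
            int field_width;
    };

    static_assert(sizeof(struct printf_spec_sketch) == 8);
    static_assert(1 + 1 == 2, "arithmetic still works");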
-
WangBo authored
Use "unsigned int" instead of "unsigned", to make code more clear. Link: http://lkml.kernel.org/r/1551354739-6648-1-git-send-email-wdjjwb@163.com Signed-off-by:
WangBo <wang.bo116@zte.com.cn> Reviewed-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Randy Dunlap authored
The single quotation marks around "const" were causing a documentation markup warning with reST. Instead of fixing that warning, just delete that comment line and the gcc-3.3 hack of using "const" in the roundup() macro since gcc-3.3 is no longer supported for kernel builds. I did around 20 different $arch builds with no problems, but we'll just have to see if this causes problems for anyone else out there. Link: http://lkml.kernel.org/r/ec5dcf72-7c3e-3513-af0c-4003ed598854@infradead.org Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Suggested-by:
Matthew Wilcox <willy@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Jani Nikula authored
Include asm/div64.h for do_div() usage in DIV_ROUND_DOWN_ULL() and DIV_ROUND_CLOSEST_ULL(). Remove the old CONFIG_LBDAF=y conditional include. Link: http://lkml.kernel.org/r/20181228153430.23763-1-jani.nikula@intel.com Signed-off-by:
Jani Nikula <jani.nikula@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Mar 07, 2019
-
-
Mattias Jacobsson authored
In preparation for adding WMI support to MODULE_DEVICE_TABLE() move the definition of struct wmi_device_id to mod_devicetable.h and inline guid_string in the struct. Changing guid_string to an inline char array changes the loop conditions when looping over an array of struct wmi_device_id. Therefore update wmi_dev_match()'s loop to check for an empty guid_string instead of a NULL pointer. Signed-off-by:
Mattias Jacobsson <2pi@mok.nu> [dvhart: Move UUID_STRING_LEN define to this patch] Signed-off-by:
Darren Hart (VMware) <dvhart@infradead.org>
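A sketch of the changed match loop: with guid_string embedded in the struct, the sentinel entry is recognised by an empty string rather than a NULL pointer. Field size and the GUID value here are illustrative:

    #include <stdio.h>
    #include <string.h>

    #define UUID_STRING_LEN 36

    struct wmi_device_id_sketch {
            char guid_string[UUID_STRING_LEN + 1];  /* inline array, not a pointer */
    };

    static int wmi_match_sketch(const struct wmi_device_id_sketch *id,
                                const char *guid)
    {
            /* Old terminator: a NULL guid_string pointer.
             * New terminator: an empty embedded string. */
            for (; id->guid_string[0] != '\0'; id++)
                    if (strcmp(id->guid_string, guid) == 0)
                            return 1;
            return 0;
    }

    int main(void)
    {
            static const struct wmi_device_id_sketch table[] = {
                    { "12345678-1234-1234-1234-123456789012" },
                    { "" }  /* terminating entry */
            };

            printf("%d\n", wmi_match_sketch(table,
                           "12345678-1234-1234-1234-123456789012"));
            return 0;
    }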
-
- Mar 06, 2019
-
-
Jens Axboe authored
This is basically a direct port of bfe4037e, which implements a one-shot poll command through aio. Description below is based on that commit as well. However, instead of adding a POLL command and relying on io_cancel(2) to remove it, we mimic the epoll(2) interface of having a command to add a poll notification, IORING_OP_POLL_ADD, and one to remove it again, IORING_OP_POLL_REMOVE. To poll for a file descriptor the application should submit an sqe of type IORING_OP_POLL_ADD. It will poll the fd for the events specified in the poll_events field. Unlike poll or epoll without EPOLLONESHOT this interface always works in one shot mode, that is, once the sqe is completed, it will have to be resubmitted. Reviewed-by:
Hannes Reinecke <hare@suse.com> Based-on-code-from: Christoph Hellwig <hch@lst.de> Signed-off-by:
Jens Axboe <axboe@kernel.dk>
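A fragment showing how a submission entry for the new opcode might be filled in, assuming a 5.1+ uapi header for the io_uring definitions; ring setup and submission are omitted, so this is only the shape of the request, not a complete program:

    #include <linux/io_uring.h>
    #include <poll.h>
    #include <string.h>

    /* One-shot POLLIN notification: once the completion arrives, a new
     * sqe has to be submitted to keep watching the descriptor. */
    static void prep_poll_add(struct io_uring_sqe *sqe, int fd,
                              unsigned long long udata)
    {
            memset(sqe, 0, sizeof(*sqe));
            sqe->opcode = IORING_OP_POLL_ADD;
            sqe->fd = fd;                   /* descriptor to watch */
            sqe->poll_events = POLLIN;      /* same event mask as poll(2) */
            sqe->user_data = udata;         /* echoed back in the cqe */
    }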
-
Josh Poimboeuf authored
The .orc_unwind section is a packed array of 6-byte structs. It's currently aligned to 6 bytes, which is causing warnings in the LLD linker. Six isn't a power of two, so it's not a valid alignment value. The actual alignment doesn't matter much because it's an array of packed structs. An alignment of two is sufficient. In reality it always gets aligned to four bytes because it comes immediately after the 4-byte-aligned .orc_unwind_ip section. Fixes: ee9f8fce ("x86/unwind: Add the ORC unwinder") Reported-by:
Nick Desaulniers <ndesaulniers@google.com> Reported-by:
Dmitry Golovin <dima@golovin.in> Reported-by:
Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Sedat Dilek <sedat.dilek@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://github.com/ClangBuiltLinux/linux/issues/218 Link: https://lkml.kernel.org/r/d55027ee95fe73e952dcd8be90aebd31b0095c45.1551892041.git.jpoimboe@redhat.com
-
Cornelia Huck authored
A virtio transport is free to implement some of the callbacks in virtio_config_ops in such a way that they cannot be called from atomic context (e.g. virtio-ccw, which maps a lot of the callbacks to channel I/O, which is an inherently asynchronous mechanism). This can be very surprising for developers using the much more common virtio-pci transport, just to find out that things break when used on s390. The documentation for virtio_config_ops now contains a comment explaining this, but it makes sense to add a might_sleep() annotation to various wrapper functions in the virtio core to avoid surprises later. Note that annotations are NOT added to two classes of calls: - direct calls from device drivers (all current callers should be fine, however) - calls which clearly won't be made from atomic context (such as those ultimately coming in via the driver core) Signed-off-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
-
Joerg Roedel authored
This function returns the maximum segment size for a single dma transaction of a virtio device. The possible limit comes from the SWIOTLB implementation in the Linux kernel, that has an upper limit of (currently) 256kb of contiguous memory it can map. Other DMA-API implementations might also have limits. Use the new dma_max_mapping_size() function to determine the maximum mapping size when DMA-API is in use for virtio. Cc: stable@vger.kernel.org Reviewed-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Joerg Roedel <jroedel@suse.de> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
-
Joerg Roedel authored
The function returns the maximum size that can be mapped using DMA-API functions. The patch also adds the implementation for direct DMA and a new dma_map_ops pointer so that other implementations can expose their limit. Cc: stable@vger.kernel.org Reviewed-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Joerg Roedel <jroedel@suse.de> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
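A stand-in sketch of the dispatch the commit adds: the DMA core asks the active implementation for its limit through the new optional ops hook and falls back to "no limit" when none is provided; types are simplified, not the kernel's:

    #include <stddef.h>
    #include <stdint.h>

    struct dma_map_ops_sketch {
            /* new optional hook: report the largest mappable segment */
            size_t (*max_mapping_size)(void *dev);
    };

    struct device_sketch {
            const struct dma_map_ops_sketch *dma_ops;
    };

    size_t dma_max_mapping_size_sketch(struct device_sketch *dev)
    {
            const struct dma_map_ops_sketch *ops = dev->dma_ops;

            if (ops && ops->max_mapping_size)
                    return ops->max_mapping_size(dev);

            return SIZE_MAX;        /* no implementation-imposed limit */
    }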
-
Joerg Roedel authored
This function will be used from dma_direct code to determine the maximum segment size of a dma mapping. Cc: stable@vger.kernel.org Reviewed-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Joerg Roedel <jroedel@suse.de> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
-
Joerg Roedel authored
The function returns the maximum size that can be remapped by the SWIOTLB implementation. This function will be later exposed to users through the DMA-API. Cc: stable@vger.kernel.org Reviewed-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Joerg Roedel <jroedel@suse.de> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
-
Greg Thelen authored
Commit 682aa8e1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates") refers to inode_switch_wb_work_fn() which never got merged. Switch the comments to inode_switch_wbs_work_fn(). Link: http://lkml.kernel.org/r/20190305004617.142590-1-gthelen@google.com Fixes: 682aa8e1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates") Signed-off-by:
Greg Thelen <gthelen@google.com> Reviewed-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Andrey Ryabinin authored
We have a common pattern to access lru_lock from a page pointer: zone_lru_lock(page_zone(page)) Which is silly, because it unfolds to this: &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]->zone_pgdat->lru_lock while we can simply do &NODE_DATA(page_to_nid(page))->lru_lock Remove the zone_lru_lock() function, since it only complicates things. Use the 'page_pgdat(page)->lru_lock' pattern instead. [aryabinin@virtuozzo.com: a slightly better version of __split_huge_page()] Link: http://lkml.kernel.org/r/20190301121651.7741-1-aryabinin@virtuozzo.com Link: http://lkml.kernel.org/r/20190228083329.31892-2-aryabinin@virtuozzo.com Signed-off-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Mel Gorman <mgorman@techsingularity.net> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Rik van Riel <riel@surriel.com> Cc: William Kucharski <william.kucharski@oracle.com> Cc: John Hubbard <jhubbard@nvidia.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Andrey Ryabinin authored
workingset_eviction() doesn't use and never did use the @mapping argument. Remove it. Link: http://lkml.kernel.org/r/20190228083329.31892-1-aryabinin@virtuozzo.com Signed-off-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Acked-by:
Johannes Weiner <hannes@cmpxchg.org> Acked-by:
Rik van Riel <riel@surriel.com> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Mel Gorman <mgorman@techsingularity.net> Cc: Michal Hocko <mhocko@kernel.org> Cc: William Kucharski <william.kucharski@oracle.com> Cc: John Hubbard <jhubbard@nvidia.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Yu Zhao authored
Declaration of struct node is required regardless. On UMA systems, including compaction.h without preceding node.h shouldn't cause a build error. Link: http://lkml.kernel.org/r/20190208080437.253322-1-yuzhao@google.com Signed-off-by:
Yu Zhao <yuzhao@google.com> Reviewed-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Michal Hocko <mhocko@suse.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
john.hubbard@gmail.com authored
From: John Hubbard <jhubbard@nvidia.com> This combines the common elements of these routines: page_cache_get_speculative() page_cache_add_speculative() This was anticipated by the original author, as shown by the comment in commit ce0ad7f0 ("powerpc/mm: Lockless get_user_pages_fast() for 64-bit (v3)"): "Same as above, but add instead of inc (could just be merged)" There is no intention to introduce any behavioral change, but there is a small risk of that, due to slightly differing ways of expressing the TINY_RCU and related configurations. This also removes the VM_BUG_ON(in_interrupt()) that was in page_cache_add_speculative(), but not in page_cache_get_speculative(). This provides slightly less detection of such bugs, but given that it was only there on the "add" path anyway, we can likely do without it just fine. And it removes the VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page); that page_cache_add_speculative() had. Link: http://lkml.kernel.org/r/20190206231016.22734-2-jhubbard@nvidia.com Signed-off-by:
John Hubbard <jhubbard@nvidia.com> Reviewed-by:
Andrew Morton <akpm@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jeff Layton <jlayton@kernel.org> Cc: Matthew Wilcox <willy@infradead.org> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-