  1. Dec 19, 2018
    • Ingo Molnar's avatar
      Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs" · e769742d
      Ingo Molnar authored
      
      This reverts commit 5bdcd510.
      
      The macro based workarounds for GCC's inlining bugs caused regressions: distcc
      and other distro build setups broke, and the fixes are neither easy nor would
      they solve regressions on already existing installations.
      
      So we are reverting this patch and the 8 followup patches.
      
      What makes this revert easier is that GCC9 will likely include the new 'asm inline'
      syntax that makes inlining of assembly blocks a lot more robust.
      
      This is superior to any macro based hackery - and might even be
      backported to GCC8, which would give all modern distros the inlining
      fixes as well.
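
      A sketch of the syntax, assuming the extension lands as GCC proposed it
      (the called helper is illustrative):

          static inline void trace_hint(void)
          {
                  /* GCC sizes an asm statement by its string length, which
                   * can push callers over the inliner's size limits: */
                  asm volatile ("call illustrative_helper");
          }

          static inline void trace_hint_v2(void)
          {
                  /* 'asm inline' makes GCC treat the statement as minimum
                   * size for inlining decisions, keeping callers inlinable: */
                  asm inline volatile ("call illustrative_helper");
          }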
      
      Many thanks to Masahiro Yamada and others for helping sort out these problems.
      
      Reported-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Richard Biener <rguenther@suse.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Segher Boessenkool <segher@kernel.crashing.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e769742d
  2. Dec 18, 2018
    • Chang S. Bae's avatar
      x86/fsgsbase/64: Fix the base write helper functions · 87ab4689
      Chang S. Bae authored
      
      Andy spotted a regression in the fs/gs base helpers after the patch series
      was committed. The helper functions which write the fs/gs base do not just
      write the base, they also change the index. That's wrong and the two need
      to be separated, because writing the base must not modify the index.
      
      While the regression is not causing any harm right now because the only
      caller depends on that behaviour, it's a guarantee for subtle breakage down
      the road.
      
      Make the caller change the index explicitly instead of hiding that change
      in the helpers.
      
      As a consequence, the task write helpers no longer handle the current
      task. The range check for the base value is also factored out to minimize
      code redundancy in the callers.
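
      A hedged sketch of the intended split (the function body is illustrative,
      not the literal patch):

          /* The helper writes only the base of a non-current task ... */
          static void x86_fsbase_write_task(struct task_struct *task,
                                            unsigned long fsbase)
          {
                  WARN_ON_ONCE(task == current);
                  task->thread.fsbase = fsbase;
          }

          /* ... and the caller changes the index explicitly when needed: */
          task->thread.fsindex = 0;
          x86_fsbase_write_task(task, fsbase);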
      
      Fixes: b1378a56 ("x86/fsgsbase/64: Introduce FS/GS base helper functions")
      Suggested-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Link: https://lkml.kernel.org/r/20181126195524.32179-1-chang.seok.bae@intel.com
      87ab4689
  3. Dec 11, 2018
  4. Dec 03, 2018
  5. Nov 28, 2018
    • Thomas Gleixner's avatar
      x86/speculation: Add seccomp Spectre v2 user space protection mode · 6b3e64c2
      Thomas Gleixner authored
      
      If 'prctl' mode of user space protection from spectre v2 is selected
      on the kernel command-line, STIBP and IBPB are applied on tasks which
      restrict their indirect branch speculation via prctl.
      
      SECCOMP enables the SSBD mitigation for sandboxed tasks already, so it
      makes sense to prevent spectre v2 user space to user space attacks as
      well.
      
      The Intel mitigation guide documents how STIBP works:
          
         Setting bit 1 (STIBP) of the IA32_SPEC_CTRL MSR on a logical processor
         prevents the predicted targets of indirect branches on any logical
         processor of that core from being controlled by software that executes
         (or executed previously) on another logical processor of the same core.
      
      Ergo, setting STIBP protects the task itself from being attacked by a
      task running on a different hyper-thread, and protects the tasks running
      on different hyper-threads from being attacked by it.
      
      While the document suggests that the branch predictors are shielded between
      the logical processors, the observed performance regressions suggest that
      STIBP simply disables the branch predictor more or less completely. Of
      course the document wording is vague, but the fact that there is also no
      requirement for issuing IBPB when STIBP is used points clearly in that
      direction. The kernel still issues IBPB even when STIBP is used until Intel
      clarifies the whole mechanism.
      
      IBPB is issued when the task switches out, so malicious sandbox code cannot
      mistrain the branch predictor for the next user space task on the same
      logical processor.
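
      With this change the new mode can be selected on the kernel command
      line; a minimal example ('seccomp' is the mode this patch adds to the
      spectre_v2_user= option):

          spectre_v2_user=seccomp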
      
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185006.051663132@linutronix.de
      
      6b3e64c2
    • Thomas Gleixner's avatar
      x86/speculation: Add prctl() control for indirect branch speculation · 9137bb27
      Thomas Gleixner authored
      
      Add the PR_SPEC_INDIRECT_BRANCH option for the PR_GET_SPECULATION_CTRL and
      PR_SET_SPECULATION_CTRL prctls to allow fine grained per task control of
      indirect branch speculation via STIBP and IBPB.
      
      Invocations:
       Check indirect branch speculation status with
       - prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
      
       Enable indirect branch speculation with
       - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0);
      
       Disable indirect branch speculation with
       - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0);
      
       Force disable indirect branch speculation with
       - prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
      
      See Documentation/userspace-api/spec_ctrl.rst.
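
      A minimal user-space sketch of the new interface (it assumes the
      PR_SPEC_* constants are visible via the prctl headers; on older
      installations they may need to be defined manually):

          #include <stdio.h>
          #include <sys/prctl.h>
          #include <linux/prctl.h>

          int main(void)
          {
                  /* Query the indirect branch speculation state of this task */
                  int state = prctl(PR_GET_SPECULATION_CTRL,
                                    PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
                  printf("speculation state: %d\n", state);

                  /* Opt this task out of indirect branch speculation */
                  if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                            PR_SPEC_DISABLE, 0, 0))
                          perror("prctl");
                  return 0;
          }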
      
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.866780996@linutronix.de
      9137bb27
    • Thomas Gleixner's avatar
      x86/speculation: Prevent stale SPEC_CTRL msr content · 6d991ba5
      Thomas Gleixner authored
      
      The seccomp speculation control operates on all tasks of a process, but
      only the current task of a process can update the MSR immediately. For the
      other threads the update is deferred to the next context switch.
      
      This creates the following situation with Process A and B:
      
      Process A task 2 and Process B task 1 are pinned on CPU1. Process A task 2
      does not have the speculation control TIF bit set. Process B task 1 has the
      speculation control TIF bit set.
      
      CPU0					CPU1
      					MSR bit is set
      					ProcB.T1 schedules out
      					ProcA.T2 schedules in
      					MSR bit is cleared
      ProcA.T1
        seccomp_update()
        set TIF bit on ProcA.T2
      					ProcB.T1 schedules in
      					MSR is not updated  <-- FAIL
      
      This happens because the context switch code tries to avoid the MSR update
      if the speculation control TIF bits of the incoming and the outgoing task
      are the same. In the worst case ProcB.T1 and ProcA.T2 are the only tasks
      scheduling back and forth on CPU1, which keeps the MSR stale forever.
      
      In theory this could be remedied by IPIs, but chasing the remote task which
      could be migrated is complex and full of races.
      
      The straightforward solution is to avoid the asynchronous update of the
      TIF bit and defer it to the next context switch. The speculation control
      state is stored in task_struct::atomic_flags by the prctl and seccomp
      updates already.
      
      Add a new TIF_SPEC_FORCE_UPDATE bit and set this after updating the
      atomic_flags. Check the bit on context switch and force a synchronous
      update of the speculation control if set. Use the same mechanism for
      updating the current task.
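
      A hedged sketch of the resulting context switch check (illustrative, not
      the literal patch):

          if (unlikely(task_thread_info(next)->flags & _TIF_SPEC_FORCE_UPDATE)) {
                  /* Recompute the TIF speculation bits from the atomic_flags
                   * state and force a synchronous SPEC_CTRL MSR update: */
                  speculation_ctrl_update_tif(next);
                  clear_tsk_thread_flag(next, TIF_SPEC_FORCE_UPDATE);
          }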
      
      Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1811272247140.1875@nanos.tec.linutronix.de
      6d991ba5
    • Thomas Gleixner's avatar
      x86/speculation: Prepare for conditional IBPB in switch_mm() · 4c71a2b6
      Thomas Gleixner authored
      
      The IBPB speculation barrier is issued from switch_mm() when the kernel
      switches to a user space task with a different mm than the user space task
      which ran last on the same CPU.
      
      An additional optimization is to avoid IBPB when the incoming task can be
      ptraced by the outgoing task. This optimization only works when switching
      directly between two user space tasks. When switching from a kernel task
      to a user space task the optimization fails because the previous task
      cannot be accessed anymore. So in quite a few scenarios the optimization
      just adds overhead.
      
      The upcoming conditional IBPB support will issue IBPB only for user space
      tasks which have the TIF_SPEC_IB bit set. This requires handling the
      following cases:
      
        1) Switch from a user space task (potential attacker) which has
           TIF_SPEC_IB set to a user space task (potential victim) which has
           TIF_SPEC_IB not set.
      
        2) Switch from a user space task (potential attacker) which has
           TIF_SPEC_IB not set to a user space task (potential victim) which has
           TIF_SPEC_IB set.
      
      This needs to be optimized for the case where the IBPB can be avoided when
      only kernel threads ran in between user space tasks which belong to the
      same process.
      
      The current check whether two tasks belong to the same context uses the
      tasks' context id. While correct, it's simpler to use the mm pointer
      because it allows mangling the TIF_SPEC_IB bit into it. The context id
      based mechanism requires extra storage, which creates worse code.
      
      When a task is scheduled out its TIF_SPEC_IB bit is mangled as bit 0 into
      the per CPU storage which is used to track the last user space mm which was
      running on a CPU. This bit can be used together with the TIF_SPEC_IB bit of
      the incoming task to make the decision whether IBPB needs to be issued or
      not to cover the two cases above.
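
      A hedged sketch of the mangling (constant and helper names follow the
      description above but are illustrative):

          /* An mm pointer is at least word aligned, so bit 0 is free to
           * carry the outgoing task's TIF_SPEC_IB state: */
          #define LAST_USER_MM_IBPB	0x1UL

          static unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
          {
                  unsigned long next_tif = task_thread_info(next)->flags;
                  unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;

                  return (unsigned long)next->mm | ibpb;
          }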
      
      As conditional IBPB is going to be the default, remove the dubious ptrace
      check for the IBPB always case and simply issue IBPB always when the
      process changes.
      
      Move the storage to a different place in the struct as the original one
      created a hole.
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.466447057@linutronix.de
      4c71a2b6
    • Thomas Gleixner's avatar
      x86/speculation: Avoid __switch_to_xtra() calls · 5635d999
      Thomas Gleixner authored
      
      The TIF_SPEC_IB bit does not need to be evaluated in the decision to invoke
      __switch_to_xtra() when:
      
       - CONFIG_SMP is disabled
      
       - The conditional STIBP mode is disabled
      
      The TIF_SPEC_IB bit still controls IBPB in both cases so the TIF work mask
      checks might invoke __switch_to_xtra() for nothing if TIF_SPEC_IB is the
      only set bit in the work masks.
      
      Optimize it out by masking the bit at compile time for CONFIG_SMP=n and at
      run time when the static key controlling the conditional STIBP mode is
      disabled.
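
      A hedged sketch of the masking (mask and static key names illustrative):

          static __always_inline bool switch_to_xtra_needed(unsigned long prev_tif,
                                                            unsigned long next_tif)
          {
                  unsigned long mask = _TIF_WORK_CTXSW;

                  /* TIF_SPEC_IB only matters for the MSR update when SMT is
                   * possible and the conditional STIBP mode is active: */
                  if (!IS_ENABLED(CONFIG_SMP) ||
                      !static_branch_unlikely(&switch_to_cond_stibp))
                          mask &= ~_TIF_SPEC_IB;

                  return ((prev_tif | next_tif) & mask) != 0;
          }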
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.374062201@linutronix.de
      
      5635d999
    • Thomas Gleixner's avatar
      x86/process: Consolidate and simplify switch_to_xtra() code · ff16701a
      Thomas Gleixner authored
      
      Move the conditional invocation of __switch_to_xtra() into an inline
      function so the logic can be shared between 32 and 64 bit.
      
      Stop passing the TSS pointer through and retrieve it directly in the
      bitmap handling function. Use this_cpu_ptr() instead of the per_cpu()
      indirection.
      
      This is a preparatory change so integration of conditional indirect branch
      speculation optimization happens only in one place.
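
      A hedged sketch of the consolidated helper (illustrative):

          static inline void switch_to_extra(struct task_struct *prev,
                                             struct task_struct *next)
          {
                  unsigned long next_tif = task_thread_info(next)->flags;
                  unsigned long prev_tif = task_thread_info(prev)->flags;

                  /* Shared by the 32-bit and 64-bit switch_to() paths */
                  if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT ||
                               prev_tif & _TIF_WORK_CTXSW_PREV))
                          __switch_to_xtra(prev, next);
          }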
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.280855518@linutronix.de
      ff16701a
    • Tim Chen's avatar
      x86/speculation: Prepare for per task indirect branch speculation control · 5bfbe3ad
      Tim Chen authored
      
      To avoid the overhead of STIBP always on, it's necessary to allow per task
      control of STIBP.
      
      Add a new task flag TIF_SPEC_IB and evaluate it during context switch if
      SMT is active and flag evaluation is enabled by the speculation control
      code. Add the conditional evaluation to x86_virt_spec_ctrl() as well so the
      guest/host switch works properly.
      
      This has no effect because TIF_SPEC_IB cannot be set yet and the static key
      which controls evaluation is off. Preparatory patch for adding the control
      code.
      
      [ tglx: Simplify the context switch logic and make the TIF evaluation
        depend on SMP=y and on the static key controlling the conditional
        update. Rename it to TIF_SPEC_IB because it controls both STIBP and
        IBPB ]
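
      A hedged sketch of the TIF-to-MSR translation this prepares (shift and
      constant names illustrative):

          /* Translate TIF_SPEC_IB into the STIBP bit of the SPEC_CTRL MSR */
          static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
          {
                  BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT);
                  return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
          }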
      
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.176917199@linutronix.de
      
      5bfbe3ad
    • Thomas Gleixner's avatar
      x86/speculation: Add command line control for indirect branch speculation · fa1202ef
      Thomas Gleixner authored
      
      Add command line control for user space indirect branch speculation
      mitigations. The new option is: spectre_v2_user=
      
      The initial options are:
      
          - on:    Unconditionally enabled
          - off:   Unconditionally disabled
          - auto:  Kernel selects mitigation (default off for now)
      
      When the spectre_v2= command line argument is either 'on' or 'off', the
      application to application control follows that state even if a
      contradicting spectre_v2_user= argument is supplied.
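
      A minimal example of the new option on the kernel command line, using
      one of the values listed above:

          spectre_v2_user=off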
      
      Originally-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185005.082720373@linutronix.de
      fa1202ef
    • Thomas Gleixner's avatar
      x86/speculation: Rename SSBD update functions · 26c4d75b
      Thomas Gleixner authored
      
      During context switch, the SSBD bit in SPEC_CTRL MSR is updated according
      to changes of the TIF_SSBD flag in the current and next running task.
      
      Currently, only the bit controlling speculative store bypass disable in
      SPEC_CTRL MSR is updated and the related update functions all have
      "speculative_store" or "ssb" in their names.
      
      For enhanced mitigation control other bits in SPEC_CTRL MSR need to be
      updated as well, which makes the SSB names inadequate.
      
      Rename the "speculative_store*" functions to a more generic name. No
      functional change.
      
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185004.058866968@linutronix.de
      
      
      26c4d75b
    • Tim Chen's avatar
      x86/speculation: Update the TIF_SSBD comment · 8eb729b7
      Tim Chen authored
      
      "Reduced Data Speculation" is an obsolete term. The correct new name is
      "Speculative store bypass disable" - which is abbreviated into SSBD.
      
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Casey Schaufler <casey.schaufler@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Waiman Long <longman9394@gmail.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Dave Stewart <david.c.stewart@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181125185003.593893901@linutronix.de
      
      8eb729b7
    • Zhenzhong Duan's avatar
      x86/retpoline: Remove minimal retpoline support · ef014aae
      Zhenzhong Duan authored
      
      Now that CONFIG_RETPOLINE hard depends on compiler support, there is no
      reason to keep the minimal retpoline support around which only provided
      basic protection in the assembly files.
      
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: <srinivas.eeda@oracle.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/f06f0a89-5587-45db-8ed2-0a9d6638d5c0@default
      
      ef014aae
    • Zhenzhong Duan's avatar
      x86/retpoline: Make CONFIG_RETPOLINE depend on compiler support · 4cd24de3
      Zhenzhong Duan authored
      
      Since retpoline capable compilers are widely available, make
      CONFIG_RETPOLINE hard depend on the compiler capability.
      
      Break the build when CONFIG_RETPOLINE is enabled and the compiler does not
      support it. Emit an error message in that case:
      
       "arch/x86/Makefile:226: *** You are building kernel with non-retpoline
        compiler, please update your compiler..  Stop."
      
      [dwmw: Fail the build with non-retpoline compiler]
      
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michal Marek <michal.lkml@markovi.net>
      Cc: <srinivas.eeda@oracle.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/cca0cb20-f9e2-4094-840b-fb0f8810cd34@default
      
      4cd24de3
  6. Nov 27, 2018
  7. Nov 20, 2018
  8. Nov 09, 2018
    • Juergen Gross's avatar
      x86/xen: fix pv boot · 1457d8cf
      Juergen Gross authored
      
      Commit 9da3f2b7 ("x86/fault: BUG() when uaccess helpers fault on
      kernel addresses") introduced a regression for booting Xen PV guests.
      
      Xen PV guests are using __put_user() and __get_user() for accessing the
      p2m map (physical to machine frame number map) as accesses might fail
      in case of not populated areas of the map.
      
      With the above commit, using __put_user() and __get_user() for accessing
      kernel pages is no longer valid. So replace the Xen hack with appropriate
      p2m access functions which use the default fixup handler.
      
      Fixes: 9da3f2b7 ("x86/fault: BUG() when uaccess helpers fault on kernel addresses")
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
      1457d8cf
  9. Nov 06, 2018
    • Kirill A. Shutemov's avatar
      x86/mm: Move LDT remap out of KASLR region on 5-level paging · d52888aa
      Kirill A. Shutemov authored
      
      On 5-level paging the LDT remap area is placed in the middle of the KASLR
      randomization region and it can overlap with the direct mapping, the
      vmalloc area or the vmap area.
      
      The LDT mapping is per mm, so it cannot be moved into the P4D page table
      next to the CPU_ENTRY_AREA without complicating PGD table allocation for
      5-level paging.
      
      The 4 PGD slot gap just before the direct mapping is reserved for
      hypervisors, so it cannot be used.
      
      Move the direct mapping one slot deeper and use the resulting gap for the
      LDT remap area. The resulting layout is the same for 4 and 5 level paging.
      
      [ tglx: Massaged changelog ]
      
      Fixes: f55f0501 ("x86/pti: Put the LDT in its own PGD if PTI is on")
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: bp@alien8.de
      Cc: hpa@zytor.com
      Cc: dave.hansen@linux.intel.com
      Cc: peterz@infradead.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: bhe@redhat.com
      Cc: willy@infradead.org
      Cc: linux-mm@kvack.org
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181026122856.66224-2-kirill.shutemov@linux.intel.com
      d52888aa
    • Vishal Verma's avatar
      acpi/nfit, x86/mce: Validate a MCE's address before using it · e8a308e5
      Vishal Verma authored
      
      The NFIT machine check handler uses the physical address from the mce
      structure, and compares it against information in the ACPI NFIT table
      to determine whether that location lies on an NVDIMM. The mce->addr
      field however may not always be valid, and this is indicated by the
      MCI_STATUS_ADDRV bit in the status field.
      
      Export mce_usable_address() which already performs validation for the
      address, and use it in the NFIT handler.
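
      A hedged sketch of the check in the handler (illustrative):

          /* Bail out if the MCE does not carry a valid physical address */
          if (!mce_usable_address(mce))
                  return NOTIFY_DONE;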
      
      Fixes: 6839a6d9 ("nfit: do an ARS scrub on hitting a latent media error")
      Reported-by: Robert Elliott <elliott@hpe.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      CC: Arnd Bergmann <arnd@arndb.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      CC: Dave Jiang <dave.jiang@intel.com>
      CC: elliott@hpe.com
      CC: "H. Peter Anvin" <hpa@zytor.com>
      CC: Ingo Molnar <mingo@redhat.com>
      CC: Len Brown <lenb@kernel.org>
      CC: linux-acpi@vger.kernel.org
      CC: linux-edac <linux-edac@vger.kernel.org>
      CC: linux-nvdimm@lists.01.org
      CC: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
      CC: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      CC: Ross Zwisler <zwisler@kernel.org>
      CC: stable <stable@vger.kernel.org>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Tony Luck <tony.luck@intel.com>
      CC: x86-ml <x86@kernel.org>
      CC: Yazen Ghannam <yazen.ghannam@amd.com>
      Link: http://lkml.kernel.org/r/20181026003729.8420-2-vishal.l.verma@intel.com
      e8a308e5
    • Vishal Verma's avatar
      acpi/nfit, x86/mce: Handle only uncorrectable machine checks · 5d96c934
      Vishal Verma authored
      
      The MCE handler for nfit devices is called for memory errors on a
      Non-Volatile DIMM and adds the error location to a 'badblocks' list.
      This list is used by the various NVDIMM drivers to avoid consuming known
      poison locations during IO.
      
      The MCE handler gets called for both corrected and uncorrectable errors.
      Until now, both kinds of errors have been added to the badblocks list.
      However, corrected memory errors indicate that the problem has already
      been fixed by hardware, and the resulting interrupt is merely a
      notification to Linux.
      
      As far as future accesses to that location are concerned, it is
      perfectly fine to use, and thus doesn't need to be included in the above
      badblocks list.
      
      Add a check in the nfit MCE handler to filter out corrected mce events,
      and only process uncorrectable errors.
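
      A hedged sketch of the filter (illustrative):

          /* Corrected events were already fixed by hardware; only
           * uncorrectable errors mark poison: */
          if (!(mce->status & MCI_STATUS_UC))
                  return NOTIFY_DONE;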
      
      Fixes: 6839a6d9 ("nfit: do an ARS scrub on hitting a latent media error")
      Reported-by: Omar Avelar <omar.avelar@intel.com>
      Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      CC: Arnd Bergmann <arnd@arndb.de>
      CC: Dan Williams <dan.j.williams@intel.com>
      CC: Dave Jiang <dave.jiang@intel.com>
      CC: elliott@hpe.com
      CC: "H. Peter Anvin" <hpa@zytor.com>
      CC: Ingo Molnar <mingo@redhat.com>
      CC: Len Brown <lenb@kernel.org>
      CC: linux-acpi@vger.kernel.org
      CC: linux-edac <linux-edac@vger.kernel.org>
      CC: linux-nvdimm@lists.01.org
      CC: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
      CC: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      CC: Ross Zwisler <zwisler@kernel.org>
      CC: stable <stable@vger.kernel.org>
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Tony Luck <tony.luck@intel.com>
      CC: x86-ml <x86@kernel.org>
      CC: Yazen Ghannam <yazen.ghannam@amd.com>
      Link: http://lkml.kernel.org/r/20181026003729.8420-1-vishal.l.verma@intel.com
      5d96c934
  10. Nov 05, 2018
  11. Nov 03, 2018
    • Peter Zijlstra's avatar
      x86/qspinlock: Fix compile error · b987ffc1
      Peter Zijlstra authored
      
      With a compiler that has asm-goto but not asm-cc-output and
      CONFIG_PROFILE_ALL_BRANCHES=y we get a compiler error:
      
        arch/x86/include/asm/rmwcc.h:23:17: error: jump into statement expression
      
      Fix this by writing the if() as a boolean multiplication instead.
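
      A hedged sketch of the idiom (names illustrative, not the literal diff):

          /* With CONFIG_PROFILE_ALL_BRANCHES, if() is a macro that wraps
           * its condition in a statement expression, and an asm-goto label
           * must not be jumped into from one. So instead of:
           *
           *     if (rmwcc_bool_op(lock))
           *             val |= _SOME_FLAG;
           *
           * avoid if() entirely via boolean multiplication: */
          val = rmwcc_bool_op(lock) * _SOME_FLAG;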
      
      Reported-by: kbuild test robot <lkp@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Fixes: 7aa54be2 ("locking/qspinlock, x86: Provide liveness guarantee")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b987ffc1
  12. Nov 01, 2018
    • Dmitry Safonov's avatar
      x86/compat: Adjust in_compat_syscall() to generic code under !COMPAT · a846446b
      Dmitry Safonov authored
      
      The result of in_compat_syscall() can be pictured as:
      
      x86 platform:
          ---------------------------------------------------
          |  Arch\syscall  |  64-bit  |   ia32   |   x32    |
          |-------------------------------------------------|
          |     x86_64     |  false   |   true   |   true   |
          |-------------------------------------------------|
          |      i686      |          |  <true>  |          |
          ---------------------------------------------------
      
      Other platforms:
          -------------------------------------------
          |  Arch\syscall  |  64-bit  |   compat    |
          |-----------------------------------------|
          |     64-bit     |  false   |    true     |
          |-----------------------------------------|
          |    32-bit(?)   |          |   <false>   |
          -------------------------------------------
      
      As seen, the result of in_compat_syscall() on a generic 32-bit platform
      differs from i686.
      
      There is no reason for in_compat_syscall() == true on native i686. It is
      also easy to misread code if the result on a native 32-bit platform
      differs between arches.
      
      Because of that, non arch-specific code has many places with:
          if (IS_ENABLED(CONFIG_COMPAT) && in_compat_syscall())
      in different variations.
      
      It looks like the only non-x86 code which uses in_compat_syscall()
      outside of a CONFIG_COMPAT guard is in amd/amdkfd. But according to
      commit a18069c1 ("amdkfd: Disable support for 32-bit user processes"),
      it actually should be disabled on native i686.
      
      Rename in_compat_syscall() to in_32bit_syscall() for x86-specific code
      and make in_compat_syscall() false under !CONFIG_COMPAT.
      
      A follow on patch will clean up generic users which were forced to check
      IS_ENABLED(CONFIG_COMPAT) with in_compat_syscall().
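
      A hedged sketch of the resulting arrangement (illustrative):

          /* x86: the old semantics live on under an explicit name */
          static inline bool in_32bit_syscall(void)
          {
                  return in_ia32_syscall() || in_x32_syscall();
          }

          /* generic code: without COMPAT there are no compat syscalls */
          #ifndef CONFIG_COMPAT
          static inline bool in_compat_syscall(void) { return false; }
          #endif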
      
      Signed-off-by: Dmitry Safonov <dima@arista.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: Dmitry Safonov <0x7f454c46@gmail.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: linux-efi@vger.kernel.org
      Cc: netdev@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181012134253.23266-2-dima@arista.com
      a846446b
  13. Oct 31, 2018
  14. Oct 30, 2018
  15. Oct 29, 2018
    • Sebastian Andrzej Siewior's avatar
      x86/mm/pat: Disable preemption around __flush_tlb_all() · f77084d9
      Sebastian Andrzej Siewior authored
      
      The WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off()
      triggers every once in a while during a snapshotted system upgrade.
      
      The warning triggers since commit decab088 ("x86/mm: Remove
      preempt_disable/enable() from __native_flush_tlb()"). The callchain is:
      
        get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages()
      
      with CONFIG_DEBUG_PAGEALLOC enabled.
      
      Disable preemption during CR3 reset / __flush_tlb_all() and add a comment
      why preemption has to be disabled so it won't be removed accidentally.

      Add another preemptible() check in __flush_tlb_all() to catch callers
      with preemption enabled when PGE is enabled, because PGE enabled does not
      trigger the warning in __native_flush_tlb(). Suggested by Andy Lutomirski.
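
      A hedged sketch of the fix (illustrative):

          /* CR3 read and write must happen on the same CPU, so the reset
           * must not race with a migration: */
          preempt_disable();
          __flush_tlb_all();
          preempt_enable();

          /* And inside __flush_tlb_all(), catch offenders early: */
          VM_WARN_ON_ONCE(preemptible());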
      
      Fixes: decab088 ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()")
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181017103432.zgv46nlu3hc7k4rq@linutronix.de
      f77084d9
  16. Oct 26, 2018