Aug 02, 2018
    • samples/bpf: extend test_cgrp2_attach2 test to use cgroup storage · 28ba0687
      Roman Gushchin authored
      
      The test_cgrp2_attach test covers bpf cgroup attachment code well,
      so let's re-use it for testing allocation/releasing of cgroup storage.
      
      The extension is pretty straightforward: the bpf program will use
      the cgroup storage to save the number of transmitted bytes.
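
      A minimal sketch of such a program, assuming the sample tree's
      bpf_helpers.h and the cgroup-storage uapi of this series; the map
      and function names here are illustrative, not the test's own:

        #include <linux/bpf.h>
        #include "bpf_helpers.h"

        /* One storage slot is allocated per (cgroup, attach type) the
         * program is attached to; max_entries stays 0 for this type. */
        struct bpf_map_def SEC("maps") bytes_cnt = {
            .type       = BPF_MAP_TYPE_CGROUP_STORAGE,
            .key_size   = sizeof(struct bpf_cgroup_storage_key),
            .value_size = sizeof(__u64),
        };

        SEC("cgroup/skb")
        int count_egress_bytes(struct __sk_buff *skb)
        {
            /* Pointer to this program's storage slot for the cgroup
             * the packet belongs to. */
            __u64 *bytes = bpf_get_local_storage(&bytes_cnt, 0);

            __sync_fetch_and_add(bytes, skb->len);
            return 1;   /* allow the packet */
        }

        char _license[] SEC("license") = "GPL";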
      
      Expected output:
        $ ./test_cgrp2_attach2
        Attached DROP prog. This ping in cgroup /foo should fail...
        ping: sendmsg: Operation not permitted
        Attached DROP prog. This ping in cgroup /foo/bar should fail...
        ping: sendmsg: Operation not permitted
        Attached PASS prog. This ping in cgroup /foo/bar should pass...
        Detached PASS from /foo/bar while DROP is attached to /foo.
        This ping in cgroup /foo/bar should fail...
        ping: sendmsg: Operation not permitted
        Attached PASS from /foo/bar and detached DROP from /foo.
        This ping in cgroup /foo/bar should pass...
        ### override:PASS
        ### multi:PASS
      
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jun 28, 2018
    • bpf: Change bpf_fib_lookup to return lookup status · 4c79579b
      David Ahern authored
      
      For ACLs implemented using either FIB rules or FIB entries, the BPF
      program needs the FIB lookup status to be able to drop the packet.
      Since the bpf_fib_lookup API has not reached a released kernel yet,
      change the return code to contain an encoding of the FIB lookup
      result and return the nexthop device index in the params struct.
      
      In addition, inform the BPF program of the post-FIB-lookup reason
      why the packet needs to go up the stack.
      
      The fib result for unicast routes must have an egress device, so remove
      the check that it is non-NULL.
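
      A hedged sketch of a consumer of the new encoding; the
      BPF_FIB_LKUP_RET_* values come from this patch's uapi, while the
      header parsing (elided) and the drop policy are illustrative:

        #include <linux/bpf.h>
        #include "bpf_helpers.h"

        SEC("xdp")
        int fib_acl(struct xdp_md *ctx)
        {
            struct bpf_fib_lookup params = {};
            int rc;

            /* ... fill params (family, addresses, ifindex, ...)
             * from the parsed packet headers ... */

            rc = bpf_fib_lookup(ctx, &params, sizeof(params), 0);
            switch (rc) {
            case BPF_FIB_LKUP_RET_SUCCESS:
                /* Forwardable: params.ifindex is the nexthop device. */
                return bpf_redirect(params.ifindex, 0);
            case BPF_FIB_LKUP_RET_BLACKHOLE:
            case BPF_FIB_LKUP_RET_UNREACHABLE:
            case BPF_FIB_LKUP_RET_PROHIBIT:
                /* The ACL case: the lookup status says drop. */
                return XDP_DROP;
            default:
                /* Not forwarded, no neighbour, frag needed, ...:
                 * let the packet go up the stack. */
                return XDP_PASS;
            }
        }

        char _license[] SEC("license") = "GPL";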
      
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • samples/bpf: xdp_rxq_info action XDP_TX must adjust MAC-addrs · 509fda10
      Jesper Dangaard Brouer authored
      
      XDP_TX also requires changing the MAC addresses, else some
      hardware may drop the TX packet before it reaches the wire.
      This was observed with the mlx5 driver.

      If xdp_rxq_info selects --action XDP_TX, the swapmac
      functionality is activated.  It can also be enabled manually via
      the cmdline option --swapmac, which is practical when measuring
      the overhead of writing/updating the payload for other action
      types.
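
      A minimal sketch of the swapmac idea (the bounds check is
      verifier-mandated; the real sample uses its own swap helper):

        #include <linux/bpf.h>
        #include <linux/if_ether.h>
        #include "bpf_helpers.h"

        SEC("xdp")
        int xdp_tx_swap_mac(struct xdp_md *ctx)
        {
            void *data_end = (void *)(long)ctx->data_end;
            void *data     = (void *)(long)ctx->data;
            struct ethhdr *eth = data;
            unsigned char tmp[ETH_ALEN];

            if (data + sizeof(*eth) > data_end)
                return XDP_ABORTED;

            /* Swap src and dst MAC so the bounced frame is addressed
             * back to the sender, not dropped by the hardware. */
            __builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
            __builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
            __builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);

            return XDP_TX;
        }

        char _license[] SEC("license") = "GPL";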
      
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • samples/bpf: extend xdp_rxq_info to read packet payload · 0d25c43a
      Jesper Dangaard Brouer authored
      
      There is a cost associated with reading the packet data payload
      that this test previously ignored.  Add the option --read to
      enable reading part of the payload.
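
      A hedged sketch of what "reading part of the payload" can mean;
      the volatile load keeps the compiler from optimising the access
      away, and the amount read here is illustrative:

        #include <linux/types.h>

        /* Force an actual fetch of the first payload bytes, so the
         * cost of the memory access shows up in the pps numbers. */
        static inline __attribute__((always_inline))
        __u16 touch_payload(void *data, void *data_end)
        {
            volatile __u16 *p = data;

            if (data + sizeof(*p) > data_end)
                return 0;
            return *p;  /* volatile read forces the access */
        }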
      
      This sample/tool helps us analyse an issue observed with an mlx5
      NIC (ConnectX-5 Ex) and an Intel(R) Xeon(R) CPU E5-1650 v4.
      
      With no_touch of data:
      
      Running XDP on dev:mlx5p1 (ifindex:8) action:XDP_DROP options:no_touch
      XDP stats       CPU     pps         issue-pps
      XDP-RX CPU      0       14,465,157  0
      XDP-RX CPU      1       14,464,728  0
      XDP-RX CPU      2       14,465,283  0
      XDP-RX CPU      3       14,465,282  0
      XDP-RX CPU      4       14,464,159  0
      XDP-RX CPU      5       14,465,379  0
      XDP-RX CPU      total   86,789,992
      
      When not touching data, we observe that the CPUs have idle cycles.
      When reading data the CPUs are 100% busy in softirq.
      
      With reading data:
      
      Running XDP on dev:mlx5p1 (ifindex:8) action:XDP_DROP options:read
      XDP stats       CPU     pps         issue-pps
      XDP-RX CPU      0       9,620,639   0
      XDP-RX CPU      1       9,489,843   0
      XDP-RX CPU      2       9,407,854   0
      XDP-RX CPU      3       9,422,289   0
      XDP-RX CPU      4       9,321,959   0
      XDP-RX CPU      5       9,395,242   0
      XDP-RX CPU      total   56,657,828
      
      The effect seen above is a result of cache misses occurring when
      more RXQs are being used.  Based on perf-event observations, our
      conclusion is that the CPU's DDIO (Direct Data I/O) chooses to
      deliver packets into main memory instead of the L3 cache.  We
      also found that this can be mitigated by either using fewer RXQs
      or by reducing the NIC's RX-ring size; see the commands below.
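
      For reference, both mitigations can be applied with standard
      ethtool commands (device name taken from the runs above; the
      values are illustrative):

        # reduce the RX-ring size
        $ ethtool -G mlx5p1 rx 512
        # use fewer RX queues (channels)
        $ ethtool -L mlx5p1 combined 3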
      
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>