NFF-Go - Network Function Framework for Go (formerly YANFF)

Go Report Card GoDoc Dev chat at https://gitter.im/intel-yanff/Lobby Build Status

Network Function Framework for Go (formerly YANFF)

Wonderful news: we now support AF_XDP and (almost) support getting packets directly from Linux. You no longer need to write three different applications to process packets coming from different types of drivers or PMDs. You just write everything in NFF-Go, and it can dynamically use whatever you would like underneath. Contact us if you need help.

What it is

NFF-Go is a set of libraries for creating and deploying cloud-native Network Functions (NFs). It simplifies the creation of network functions without sacrificing performance.

  • Higher-level abstractions than DPDK, using DPDK as a fast I/O engine for performance
  • Go language: safety, productivity, performance, concurrency
  • Network functions are application programs, not virtual machines
  • Built-in scheduler that auto-scales processing, both up and down, based on input traffic

Benefits:

  • Easily leverage Intel hardware capabilities: multiple cores, AES-NI, CAT, QAT, DPDK
  • 10x reduction in lines of code
  • No need to be an expert network programmer to develop performant network functions
  • Similar performance to C/DPDK per box
  • No need to worry about elasticity - it is handled automatically
  • Take advantage of cloud-native deployment: continuous delivery, micro-services, containers

Feel the difference

A simple ACL-based firewall

package main

import (
	"github.com/intel-go/nff-go/flow"
	"github.com/intel-go/nff-go/packet"
)

// Rules are kept at package level so that the separator callback can see them.
var L3Rules *packet.L3Rules

func main() {
	// Initialize NFF-GO library to use 8 cores max.
	config := flow.Config{
		CPUCoresNumber: 8,
	}
	flow.CheckFatal(flow.SystemInit(&config))

	// Get filtering rules from access control file.
	var err error
	L3Rules, err = packet.GetL3ACLFromTextTable("Firewall.conf")
	flow.CheckFatal(err)

	// Receive packets from zero port. Receive queue will be added automatically.
	inputFlow, err := flow.SetReceiver(uint8(0))
	flow.CheckFatal(err)

	// Separate packet flow based on ACL.
	rejectFlow, err := flow.SetSeparator(inputFlow, L3Separator, nil)
	flow.CheckFatal(err)

	// Drop rejected packets.
	flow.CheckFatal(flow.SetStopper(rejectFlow))

	// Send accepted packets to first port. Send queue will be added automatically.
	flow.CheckFatal(flow.SetSender(inputFlow, uint8(1)))

	// Begin to process packets.
	flow.CheckFatal(flow.SystemStart())
}

// User-defined function for separating packets.
func L3Separator(currentPacket *packet.Packet, context flow.UserContext) bool {
	currentPacket.ParseL4()
	// Return whether the packet is accepted, based on ACL rules.
	return currentPacket.L3ACLPermit(L3Rules)
}

NFF-GO is an Open Source BSD-licensed project that runs mostly in Linux user land. The most recent patches and enhancements provided by the community are available in the develop branch. The master branch provides the latest stable released version under the appropriate tag.

Getting NFF-GO

Starting with release 0.7.0, NFF-Go uses go.mod for getting dependencies, so Go version 1.11 or later is required. To check out the NFF-Go sources, use the following command:

    git clone --recurse-submodules http://github.com/intel-go/nff-go

Setting up the build and run environment

DPDK

NFF-GO uses DPDK, so you must setup your system to build and run DPDK. See System Requirements in the DPDK Getting Started Guide for Linux for more information.

By default NFF-Go is built with Mellanox card support, so out of the box you need to install the additional dependencies required for the MLX network drivers. On Ubuntu they are called libmnl-dev and libibverbs-dev. For more details see the respective MLX driver pages for MLX4 and MLX5. If these dependencies cannot be satisfied, and Mellanox drivers are not needed, you can set the variable NFF_GO_NO_MLX_DRIVERS to some non-empty value to disable MLX driver compilation.
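For example, a build without Mellanox support might look like the following sketch (any non-empty value works; the exact value shown is just an illustration):

```shell
# Disable compilation of the Mellanox (MLX4/MLX5) PMDs before building.
export NFF_GO_NO_MLX_DRIVERS=yes
echo "NFF_GO_NO_MLX_DRIVERS=$NFF_GO_NO_MLX_DRIVERS"
# make   # then build NFF-Go as usual
```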

Additional dependencies are required for pktgen, especially if you are running the RedHat or CentOS Linux distributions. See this file for details; the LUA section for RedHat and CentOS is at its end.

After building a DPDK driver with the make command, you must register network cards to work with the DPDK driver, load necessary kernel modules, and bind cards to the modules. See Compiling the DPDK Target from Source and How to get best performance with NICs on Intel platforms in the DPDK Getting Started Guide for Linux for more information.

The kernel module, which is required for DPDK user-mode drivers, is built but not installed into the kernel directory. You can load it using the full path to the module file: nff-go/test/dpdk/dpdk/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko

Go

Use Go version 1.11.4 or higher. To check the version of Go, do:

    go version

AF_XDP support

AF_XDP support is enabled by default, and it requires the libbpf package to be installed. At the time of writing, Ubuntu does not have this library among its packages, so it is necessary to build libbpf from sources or disable AF_XDP socket support.

To disable it, set the variable NFF_GO_NO_BPF_SUPPORT to some non-empty value. When NFF-Go is built with it, AF_XDP support is disabled, and using it results in errors.
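As a quick sanity check, a sketch like the following (an assumption about your environment, not part of the NFF-Go build system) tests whether libbpf is visible to the dynamic linker and disables AF_XDP support when it is not:

```shell
# Check whether libbpf is installed; if not, disable AF_XDP support
# before building NFF-Go.
if ldconfig -p 2>/dev/null | grep -q libbpf; then
    echo "libbpf found"
else
    echo "libbpf not found; disabling AF_XDP support"
    export NFF_GO_NO_BPF_SUPPORT=yes
fi
```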

If you want to build libbpf from sources you can do it in two different ways.

  • If you are using the stock Linux kernel from your distribution, download libbpf from GitHub, then execute cd src; make; sudo make install. Add /usr/lib64 to your ldconfig path.
  • If you build the Linux kernel from sources, you can build libbpf from the Linux source tree using the commands cd tools/lib/bpf; make; sudo make install install_headers. Add /usr/local/lib64 to your ldconfig path.

Building NFF-GO

When the Go compiler runs for the first time, it downloads all dependent packages listed in the go.mod file. This operation cannot be done in parallel, because otherwise the Go package cache gets corrupted. Because of that, it is necessary to run the command go mod download before the first make. Another option is to use a single-process make -j1 the first time it is run, but it may be quite slow.

    cd nff-go
    go mod download        # do it once before first build
    make -j8

Building NFF-GO in debug mode

    make debug -j8

Running NFF-GO

Documentation

Online API documentation is available on godoc.org site. API usage is explained on our Wiki pages.

Tests

Invoking make in the top-level directory builds the testing framework and examples. NFF-GO distributed tests are packaged inside Docker container images. There are also single-node unit tests in some packages that you can run using the command:

     make testing

Docker images

To create Docker images on the local default target (either the default UNIX socket in /var/run/docker.sock or whatever is defined in the DOCKER_HOST variable), use the make images command.

To deploy Docker images for use in distributed testing, use the make deploy command. This command requires two environment variables:

  • NFF_GO_HOSTS="hostname1 hostname2 ... hostnameN" - a list of all hostnames for deployed test Docker images
  • DOCKER_PORT=2375 - the port number to connect to Docker daemons running on hosts in the NFF_GO_HOSTS variable
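A deployment session could look like the following sketch (the hostnames and port here are hypothetical placeholders, not values from this repository):

```shell
# Describe the test bed, then deploy the images.
export NFF_GO_HOSTS="hostname1 hostname2"
export DOCKER_PORT=2375
echo "Deploying to: $NFF_GO_HOSTS (Docker port $DOCKER_PORT)"
# make deploy   # requires the Docker daemons on those hosts to be reachable
```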

To delete generated images in the default Docker target, use the make clean-images command.

Running tests

After the Docker images are deployed on all test hosts, you can run distributed network tests. The test framework is located in the test/main directory and accepts a JSON file with a test specification. There are predefined configs for performance and stability tests in the same directory. To run these tests, change hostname1 and hostname2 to the hosts from the NFF_GO_HOSTS list in these JSON files.

Cleaning-up

To clean all generated binaries, use the make clean command. To delete all deployed images listed in NFF_GO_HOSTS, use the make cleanall command.

Contributing

If you want to contribute to NFF-Go, check our Contributing guide. We also recommend checking the bugs with 'help-wanted' or 'easyfix' in our list of open issues; these bugs can be solved without an extensive knowledge of NFF-Go. We would love to help you start contributing.

You can reach the NFF-Go development team via our mailing list.

Comments
  • Run nff-go on Azure


    Hi, I found a post to run DPDK on Azure Linux virtual machine.

    I'm using Ubuntu 16.04 with kernel 4.15.0-1015-azure, DPDK 18.02, and nff-go 0.4.1.

    In the post, I also found that I have to enable the Mellanox PMDs:

    sed -ri 's,(MLX._PMD=)n,\1y,' build/.config
    

    and then I can run the testpmd successfully:

    ./build/app/testpmd -w 0002:00:02.0 --vdev="net_vdev_netvsc0,iface=eth1" -- -i --port-topology=chained
    

    However, I have no idea how to run nff-go on Azure. Do you have any suggestions for how to configure nff-go?

    Right now I've tried setting the DPDKArgs:

    • []string{"--vdev", "net_vdev_netvsc0,iface=eth1"}
    • []string{"--vdev", "net_vdev_netvsc0,iface=eth1", "-w", "0002:00:02.0"}

    but it still doesn't work.


    Also, here is my environment:

    • devbind
    Network devices using DPDK-compatible driver
    ============================================
    <none>
    
    Network devices using kernel driver
    ===================================
    0002:00:02.0 'MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function] 1004' if=enP2p0s2 drv=mlx4_core unused=
    
    Other Network devices
    =====================
    <none>
    
    Crypto devices using DPDK-compatible driver
    ===========================================
    <none>
    
    Crypto devices using kernel driver
    ==================================
    <none>
    
    Other Crypto devices
    ====================
    <none>
    
    Eventdev devices using DPDK-compatible driver
    =============================================
    <none>
    
    Eventdev devices using kernel driver
    ====================================
    <none>
    
    Other Eventdev devices
    ======================
    <none>
    
    Mempool devices using DPDK-compatible driver
    ============================================
    <none>
    
    Mempool devices using kernel driver
    ===================================
    <none>
    
    Other Mempool devices
    =====================
    <none>
    
    • ifconfig
    enP2p0s2  Link encap:Ethernet  HWaddr 00:0d:3a:19:f1:30
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:1510 (1.5 KB)
    
    eth0      Link encap:Ethernet  HWaddr 00:0d:3a:13:12:24
              inet addr:10.240.0.101  Bcast:10.240.255.255  Mask:255.255.0.0
              inet6 addr: fe80::20d:3aff:fe13:1224/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:430248 errors:0 dropped:0 overruns:0 frame:0
              TX packets:275592 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:468150790 (468.1 MB)  TX bytes:42800771 (42.8 MB)
    
    eth1      Link encap:Ethernet  HWaddr 00:0d:3a:19:f1:30
              inet addr:10.240.254.103  Bcast:10.240.255.255  Mask:255.255.0.0
              inet6 addr: fe80::20d:3aff:fe19:f130/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:762 (762.0 B)  TX bytes:1510 (1.5 KB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:178 errors:0 dropped:0 overruns:0 frame:0
              TX packets:178 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:12560 (12.5 KB)  TX bytes:12560 (12.5 KB)
    
    • ethtool -i enP2p0s2
    driver: mlx4_en
    version: 4.0-0
    firmware-version: 2.41.7004
    expansion-rom-version:
    bus-info: 0002:00:02.0
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: no
    supports-register-dump: no
    supports-priv-flags: yes
    
    • lspci
    0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
    0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
    0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
    0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
    0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
    0002:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
    

    Any advice and suggestions will be greatly appreciated.

  • Can't run v0.8.1 on both AWS and VMWare - results in RTE_HASH PANIC


    Hi, we just upgraded nff-go to 0.8.1, but it fails on EC2 and VMWare; we got this panic message.

    EAL: RTE_HASH tailq is already registered
    PANIC in tailqinitfn_rte_hash_tailq():
    Cannot initialize tailq: RTE_HASH
    6: [/opt/glasnostic/bin/router(_start+0x2a) [0x43ed5a]]
    5: [/lib64/libc.so.6(__libc_start_main+0x85) [0x7f6995e42425]]
    4: [/opt/glasnostic/bin/router(__libc_csu_init+0x4d) [0x183890d]]
    3: [/opt/glasnostic/bin/router() [0x43e19c]]
    2: [/opt/glasnostic/bin/router(__rte_panic+0xba) [0x43130e]]
    1: [/opt/glasnostic/bin/router(rte_dump_stack+0x18) [0x148d2b8]]
    Aborted
    

    We also tried downgrading to the v0.8.0 tag; everything works fine with it.

  • No packet received after upgrade to 0.4.1


    Hi, I just upgraded my project from nff-go 0.3 to 0.4.1, but suddenly no packets are received.

    Do you have any idea what happens there? I changed nothing in my code - just used a different tag of nff-go.

    Or do you have any suggestions for how to find out the reason?

    This is part of my code:

    func init() error {
    	config := &flow.Config{
    		HWTXChecksum: true,
    	}
    	flow.SystemInit(config)

    	port := getEthPort(n.nicInfo.HWAddress)
    	mainFlow, err := flow.SetReceiver(port)
    	if err != nil {
    		return err
    	}

    	// set handler
    	if err := flow.SetHandlerDrop(mainFlow, handler, nil); err != nil {
    		return err
    	}

    	if err := flow.SetSender(mainFlow, port); err != nil {
    		return err
    	}

    	go func() {
    		if err := flow.SystemStart(); err != nil {
    			logger.Println("Couldn't start NFF system:", err.Error())
    		}
    	}()

    	return nil
    }

    func handler(pkt *packetlib.Packet, _ flow.UserContext) bool {
    	nffpacket := newNFFPacket(pkt)
    	// incomming := make(chan NFFPacket)
    	incomming <- nffpacket
    	return <-nffpacket.verdict
    }
    
  • Added error handling to some functions.


    Major functions no longer panic; they return errors instead. Made a special common error type, changed function calls in tests and examples, fixed a name in the ARP test and a magic number in the packet utils test. PLEASE read this really huge pull request and pay attention to files that you wrote, to make sure I haven't broken anything.

  • DPDK binds to all the available interfaces and it will interrupt connectivity


    How do I whitelist DPDK devices before I run make perf_testing? At the moment, nff-go uses DPDK in a way that also binds the management interface, and I lose connectivity to the machine.

    Note: I am trying to run NFF-GO on Azure.

  • Build problem.


    Hitting an error at the top-level make. Smells like a GOROOT problem, but I am using a standard Ubuntu 16.04 install. One difference seems to be that the Ubuntu install puts Go in /usr/lib/go-1.9 rather than /usr/local. Spinning up a new VM using the standard location, but having issues with our cloud. Seen this before?

    make[2]: Entering directory '/mnt/Code/gocode/src/github.com/intel-go/nff-go/test/framework'
    Checking for AVX support... AVX and AVX2
    go generate
    stringer: checking package: types.go:8:2: could not import encoding/json (can't find import: )
    types.go:12: running "stringer": exit status 1
    Makefile:11: recipe for target 'apptype_string.go' failed
    make[2]: *** [apptype_string.go] Error 1
    make[2]: Leaving directory '/mnt/Code/gocode/src/github.com/intel-go/nff-go/test/framework'
    ../mk/intermediate.mk:13: recipe for target 'framework' failed
    make[1]: *** [framework] Error 2
    make[1]: Leaving directory '/mnt/Code/gocode/src/github.com/intel-go/nff-go/test'
    mk/intermediate.mk:13: recipe for target 'test' failed
    make: *** [test] Error 2

    go env
    GOARCH="amd64"
    GOBIN=""
    GOEXE=""
    GOHOSTARCH="amd64"
    GOHOSTOS="linux"
    GOOS="linux"
    GOPATH="/mnt/Code/gocode"
    GORACE=""
    GOROOT="/usr/lib/go-1.9"
    GOTOOLDIR="/usr/lib/go-1.9/pkg/tool/linux_amd64"
    GCCGO="gccgo"
    CC="gcc"
    GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build721947677=/tmp/go-build -gno-record-gcc-switches"
    CXX="g++"
    CGO_ENABLED="1"
    CGO_CFLAGS="-g -O2"
    CGO_CPPFLAGS=""
    CGO_CXXFLAGS="-g -O2"
    CGO_FFLAGS="-g -O2"
    CGO_LDFLAGS="-g -O2"
    PKG_CONFIG="pkg-config"

    go version
    go version go1.9.4 linux/amd64

  • nff-go examples on Kubernetes


    Hello,

    Are there any docs/tutorials for running and testing the nff-go examples on Kubernetes? I can build the images and do my own deployments, but I wanted to check if there is something already done by the community in that direction, as I couldn't find in the documentation/blog posts/etc.

    Thanks, Ivana

  • Any operation to free used memory in Mempool ?


    I wrote a program to capture packets and build flows using the 5-tuple. I save the raw bytes of the captured packets and rebuild them into packet.Packet structs afterwards. I ran the program overnight, and the log reminded me that the Mempool had little free space, and the slow-operations usage stopped at 7875 and no longer changed.

    DEBUG: ---------------
    
    DEBUG: System is using 4 cores now. 0 cores are left available.
    DEBUG: Current speed of 0 instance of segment1 is 920322 PKT/S, cloneNumber: 1 queue number: 8
    DEBUG: Current speed of 1 instance of segment1 is 847226 PKT/S, cloneNumber: 1 queue number: 8
    DEBUG: Mempool usage receive 787 from 8191
    DEBUG: Mempool usage receive1 958 from 8191
    DEBUG: Mempool usage receive2 624 from 8191
    DEBUG: Mempool usage receive3 651 from 8191
    DEBUG: Mempool usage receive4 675 from 8191
    DEBUG: Mempool usage receive5 553 from 8191
    DEBUG: Mempool usage receive6 802 from 8191
    DEBUG: Mempool usage receive7 677 from 8191
    DEBUG: Mempool usage receive8 99 from 8191
    DEBUG: Mempool usage receive9 101 from 8191
    DEBUG: Mempool usage receive10 115 from 8191
    DEBUG: Mempool usage receive11 105 from 8191
    DEBUG: Mempool usage receive12 124 from 8191
    DEBUG: Mempool usage receive13 101 from 8191
    DEBUG: Mempool usage receive14 113 from 8191
    DEBUG: Mempool usage receive15 127 from 8191
    DEBUG: Mempool usage slow operations 7875 from 8191
    DROP: slow operations mempool has less than 10% free space. This can lead to dropping packets while receive.
    ERROR: AllocateMbuf cannot allocate mbuf, dpdk returned:  -105
    New packet Error:, AllocateMbuf cannot allocate mbuf, dpdk returned:  -105 
    

    I want to ask: what is the reason for running out of Mempool memory? The only nff-go structure I build is packet.Packet, via the NewPacket() function. Is there any method to release the memory I used? How can I use the Mempool effectively and release it in time to keep my program running?

  • can't write frames larger than 2000 via nff-go with igb_uio in AWS EC2


    Hi, I hit a problem where I can't write a jumbo-size frame to the network interface, even though I can read one from there, on AWS EC2. We just use the default configuration of the network interface, which sets the MTU to 9001 (see Jumbo frame instances).

    • linux kernel version: 4.4.0-142-generic

    • nff-go version: 0.7.0

    • ethtool -i ens6

    driver: ena
    version: 2.0.2K
    firmware-version:
    expansion-rom-version:
    bus-info: 0000:00:06.0
    supports-statistics: yes
    supports-test: no
    supports-eeprom-access: no
    supports-register-dump: no
    supports-priv-flags: no
    
    • ifconfig
    ens5      Link encap:Ethernet  HWaddr 06:01:77:20:8f:c2
              inet addr:10.1.218.18  Bcast:10.1.255.255  Mask:255.255.0.0
              inet6 addr: fe80::401:77ff:fe20:8fc2/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:213 errors:0 dropped:0 overruns:0 frame:0
              TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:23528 (23.5 KB)  TX bytes:20784 (20.7 KB)
    
    ens6      Link encap:Ethernet  HWaddr 06:e4:c5:b9:5e:fc
              inet addr:10.1.189.85  Bcast:10.1.255.255  Mask:255.255.0.0
              inet6 addr: fe80::4e4:c5ff:feb9:5efc/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:6 errors:0 dropped:0 overruns:0 frame:0
              TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:800 (800.0 B)  TX bytes:2126 (2.1 KB)
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:211 errors:0 dropped:0 overruns:0 frame:0
              TX packets:211 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1
              RX bytes:15216 (15.2 KB)  TX bytes:15216 (15.2 KB)
    
    • lspci
    00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma]
    00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
    00:01.3 Non-VGA unclassified device: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
    00:03.0 VGA compatible controller: Device 1d0f:1111
    00:04.0 Non-Volatile memory controller: Device 1d0f:8061
    00:05.0 Ethernet controller: Device 1d0f:ec20
    00:06.0 Ethernet controller: Device 1d0f:ec20
    
  • Update latency test, add to test framework


    • Calculate median, average and stddev latency
    • Add writing statistics to file
    • Modify test system to copy created output file from container to log dir
    • Add timeout to stabilize speed
    • Use FastGenerator
    • Remove latency-stub.go as unnecessary (perf_light and others should be used as part2)
    • Add more tests to latency.json
    • Add noscheduler flag
  • Kernel Panic after closing


    Hi, I'm using nff-go and facing a kernel panic problem.

    Here is a sample code:

    https://gist.github.com/mkfsn/139274a9d0368c76c86f13ea0aa41fcc

    The devices package is similar to the devbind script in DPDK, for binding and unbinding drivers (e.g. virtio_net, uio_pci_generic).

    I use iperf3 to send packets, and if I close the nff runner (including unbinding the uio driver, binding the original kernel driver back, and bringing up the network device) while packets are being sent, the kernel sometimes panics:

    Apr 16 07:21:27 ubuntu kernel: [15074.868458] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
    Apr 16 07:21:27 ubuntu kernel: [15074.882968] net eth1: netmap queues/slots: TX 1/256, RX 1/256
    Apr 16 07:21:27 ubuntu kernel: [15074.882974] 287.301852 [ 746] virtio_netmap_attach      virtio attached txq=1, txd=256 rxq=1, rxd=256
    Apr 16 07:21:27 ubuntu kernel: [15074.920581] BUG: unable to handle kernel NULL pointer dereference at 000000000000000d
    Apr 16 07:21:27 ubuntu kernel: [15074.920627] IP: [<000000000000000d>] 0xd
    Apr 16 07:21:27 ubuntu kernel: [15074.920649] PGD 0
    Apr 16 07:21:27 ubuntu kernel: [15074.920661] Oops: 0010 [#1] SMP
    Apr 16 07:21:27 ubuntu kernel: [15074.920679] Modules linked in: nfnetlink_queue nfnetlink xt_NFQUEUE xt_NETMAP iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle xt_mark uio_pci_generic uio ip6table_filter ip6_tables iptable_filter ip_tables x_tables cirrus hid_generic ttm usbhid drm_kms_helper hid joydev psmouse crct10dif_pclmul crc32_pclmul drm ghash_clmulni_intel ppdev aesni_intel fb_sys_fops aes_x86_64 syscopyarea lrw gf128mul glue_helper input_leds floppy ablk_helper serio_raw cryptd sysfillrect parport_pc sysimgblt parport i2c_piix4 8250_fintek pata_acpi mac_hid virtio_net(O) netmap(O) autofs4
    Apr 16 07:21:27 ubuntu kernel: [15074.921002] CPU: 2 PID: 2864 Comm: nff Tainted: G           O    4.4.98 #1
    Apr 16 07:21:27 ubuntu kernel: [15074.921035] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 1.10.2-1ubuntu1~cloud0 04/01/2014
    Apr 16 07:21:27 ubuntu kernel: [15074.921078] task: ffff880236272a00 ti: ffff880037f90000 task.ti: ffff880037f90000
    Apr 16 07:21:27 ubuntu kernel: [15074.921113] RIP: 0010:[<000000000000000d>]  [<000000000000000d>] 0xd
    Apr 16 07:21:27 ubuntu kernel: [15074.921144] RSP: 0018:ffff880037f93c18  EFLAGS: 00010202
    Apr 16 07:21:27 ubuntu kernel: [15074.921169] RAX: 000000000000000d RBX: ffff880234c8c9e0 RCX: 0000000000000001
    Apr 16 07:21:27 ubuntu kernel: [15074.921202] RDX: ffff88021dbff800 RSI: ffff8802349ee1a8 RDI: ffff88021dbff800
    Apr 16 07:21:27 ubuntu kernel: [15074.921235] RBP: ffff880037f93c38 R08: 0000000000000000 R09: 0000000000000000
    Apr 16 07:21:27 ubuntu kernel: [15074.921268] R10: ffff8802349ee1a8 R11: ffff880235618210 R12: ffff88021d94b698
    Apr 16 07:21:27 ubuntu kernel: [15074.921301] R13: 0000000000000000 R14: ffff8802362fe620 R15: ffff880043365d80
    Apr 16 07:21:27 ubuntu kernel: [15074.921335] FS:  0000000000000000(0000) GS:ffff88023fd00000(0000) knlGS:0000000000000000
    Apr 16 07:21:27 ubuntu kernel: [15074.921372] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Apr 16 07:21:27 ubuntu kernel: [15074.921399] CR2: 000000000000000d CR3: 0000000001e0a000 CR4: 00000000003406e0
    Apr 16 07:21:27 ubuntu kernel: [15074.921436] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Apr 16 07:21:27 ubuntu kernel: [15074.921469] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Apr 16 07:21:27 ubuntu kernel: [15074.921502] Stack:
    Apr 16 07:21:27 ubuntu kernel: [15074.921512]  ffffffffc01924c4 ffff880235618200 0000000000000008 ffff8802349ee1a8
    Apr 16 07:21:27 ubuntu kernel: [15074.921551]  ffff880037f93c80 ffffffff812120a4 ffff8802349ee1a8 ffff880235618210
    Apr 16 07:21:27 ubuntu kernel: [15074.921589]  ffff880236272a00 ffffffff82109d50 ffff880235618200 ffff8802352d0800
    Apr 16 07:21:27 ubuntu kernel: [15074.921629] Call Trace:
    Apr 16 07:21:27 ubuntu kernel: [15074.921645]  [<ffffffffc01924c4>] ? uio_release+0x34/0x60 [uio]
    Apr 16 07:21:27 ubuntu kernel: [15074.922814]  [<ffffffff812120a4>] __fput+0xe4/0x220
    Apr 16 07:21:27 ubuntu kernel: [15074.924372]  [<ffffffff8121221e>] ____fput+0xe/0x10
    Apr 16 07:21:27 ubuntu kernel: [15074.925588]  [<ffffffff8109ef71>] task_work_run+0x81/0xa0
    Apr 16 07:21:27 ubuntu kernel: [15074.926727]  [<ffffffff81083e91>] do_exit+0x2e1/0xb00
    Apr 16 07:21:27 ubuntu kernel: [15074.927870]  [<ffffffff8183de97>] ? wait_for_completion+0x37/0x140
    Apr 16 07:21:27 ubuntu kernel: [15074.929022]  [<ffffffff81084733>] do_group_exit+0x43/0xb0
    Apr 16 07:21:27 ubuntu kernel: [15074.930160]  [<ffffffff81090992>] get_signal+0x292/0x600
    Apr 16 07:21:27 ubuntu kernel: [15074.931281]  [<ffffffff8102e567>] do_signal+0x37/0x6f0
    Apr 16 07:21:27 ubuntu kernel: [15074.932386]  [<ffffffff8112e484>] ? kprobe_flush_task+0x94/0x130
    Apr 16 07:21:27 ubuntu kernel: [15074.933513]  [<ffffffff810a9c72>] ? finish_task_switch+0x1b2/0x220
    Apr 16 07:21:27 ubuntu kernel: [15074.934664]  [<ffffffff8183ce06>] ? __schedule+0x3b6/0xa30
    Apr 16 07:21:27 ubuntu kernel: [15074.935808]  [<ffffffff8100320c>] exit_to_usermode_loop+0x8c/0xd0
    Apr 16 07:21:27 ubuntu kernel: [15074.936920]  [<ffffffff81003c16>] prepare_exit_to_usermode+0x26/0x30
    Apr 16 07:21:27 ubuntu kernel: [15074.938007]  [<ffffffff818420e5>] retint_user+0x8/0x10
    Apr 16 07:21:27 ubuntu kernel: [15074.939100] Code:  Bad RIP value.
    Apr 16 07:21:27 ubuntu kernel: [15074.940185] RIP  [<000000000000000d>] 0xd
    Apr 16 07:21:27 ubuntu kernel: [15074.941276]  RSP <ffff880037f93c18>
    Apr 16 07:21:27 ubuntu kernel: [15074.942338] CR2: 000000000000000d
    Apr 16 07:21:27 ubuntu kernel: [15074.945828] ---[ end trace fb8bf404b90ddb84 ]---
    Apr 16 07:21:27 ubuntu kernel: [15074.946862] Fixing recursive fault but reboot is needed!
    

    I didn't get any errors when closing, just "stopped successfully".

    I would appreciate any suggestions or ideas.

  • Support >255 flows


    https://github.com/intel-go/nff-go/blob/master/flow/flow.go#L1161 Casting the uint flowNumber to uint8 limits us to 255 paths/flows. I'd love to support (many) more.

    Has anyone tried doing this?

    It looks like perhaps the fix is limited to flow.go? Perhaps simply changing the uint8 -> uint in that file?

    Or am I missing something?

  • Segmentation Fault - IP Reassembly


    Hello, I am working on a project where IP reassembly of traffic is required, and I am using nff-go for this purpose. I am having a problem while running my application. I tried using a debugger, and every time my application breaks at "rte_ip_frag_free_death_row" with a "Segmentation Fault" error. This means it tries to access memory that doesn't exist. Can you please guide me on how to resolve this issue?

  • Updating to support newer versions of DPDK


    I've been working with nff-go to make it function correctly with dpdk-20.11.1 (which is critical for support of E810 cards and certain functions in them), but I've run into an issue and am potentially being a little stupid.

    In low.go there is a reference in setMbufLen to mb.anon5[0] -> mb.anon5[7], and for the life of me I cannot figure out how this maps back to the MBuf structure in DPDK itself, since that anon struct member doesn't seem to exist in any version of DPDK I've checked, all the way back to 18. What I do know is that this fails entirely on later versions of DPDK, saying that the struct member doesn't exist. Anyone got any idea how this maps back, so I can modify accordingly?

  • Unable to compile on Ubuntu 20.04LTS


    During the compilation process, the kni kernel module fails to build. It seems that DPDK 19.08 is not supported on Ubuntu 20.04.

    Are there any plans to upgrade the DPDK version in the repository?

  • panic in rte_eth_tx_burst - how to manage thread safety?


    Hi,

    I am using nff-go with the netvsc DPDK driver (for Hyper-V) with two ports. In order to respond to ARP and ICMP requests, I am using DealARPICMP.

    It appears that if an ARP response (which is sent in handleARPICMPRequests using answerPacket.SendPacket) coincides with an outgoing packet being sent by the flow graph, this causes a panic (SIGSEGV) in rte_eth_tx_burst.

    I've read in various threads (e.g. http://mails.dpdk.org/archives/dev/2014-January/001077.html) that rte_eth_tx_burst is not thread safe when used on the same port and queue. The Intel documentation also says:

    'If multiple threads are to use the same hardware queue on the same NIC port, then locking, or some other form of mutual exclusion, is necessary.'

    How can I avoid this crash and coordinate the calls to rte_eth_tx_burst between nff_go_send and directSend?

    I can synchronize the calls to directSend by using my own implementation of DealARPICMP, but I seemingly can't avoid collisions with nff_go_send.

    Thanks,

    Mike

    Edited to add relevant stack trace:

    [signal SIGSEGV: segmentation violation code=0x1 addr=0xc pc=0xa25660]

    runtime stack:
    runtime.throw(0xc19c64, 0x2a)
            /usr/local/go/src/runtime/panic.go:1117 +0x72
    runtime.sigpanic()
            /usr/local/go/src/runtime/signal_unix.go:718 +0x2e5

    goroutine 37 [syscall, locked to thread]:
    runtime.cgocall(0x863a30, 0xc000317928, 0xc000317938)
            /usr/local/go/src/runtime/cgocall.go:154 +0x5b fp=0xc0003178f8 sp=0xc0003178c0 pc=0x4dfd9b
    github.com/intel-go/nff-go/internal/low._Cfunc_directSend(0x12d0a9fc0, 0x12d0a0000, 0x0)
            _cgo_gotypes.go:572 +0x45 fp=0xc000317928 sp=0xc0003178f8 pc=0x7e19a5
    github.com/intel-go/nff-go/internal/low.DirectSend.func1(0x12d0a9fc0, 0x0, 0xc00031a170)
            /home/mike/upf/nff-go/internal/low/low.go:95 +0x57 fp=0xc000317958 sp=0xc000317928 pc=0x7e4ed7
    github.com/intel-go/nff-go/internal/low.DirectSend(0x12d0a9fc0, 0x9ed806524e5d0000, 0x6f4ea8c0dd6193f3)
            /home/mike/upf/nff-go/internal/low/low.go:95 +0x35 fp=0xc000317980 sp=0xc000317958 pc=0x7e2b15
    github.com/intel-go/nff-go/packet.(*Packet).SendPacket(...)
            /home/mike/upf/nff-go/packet/packet.go:848
    main.handleARP(0x1170b34ce, 0xc00021e108, 0x1e00a10)
            /home/mike/upf/main.go:114 +0x237 fp=0xc0003179f8 sp=0xc000317980 pc=0x85d757
    main.handleCorePacket(0x1170b3440, 0xc87c90, 0xc00021e108, 0x3c0000003f)
            /home/mike/upf/main.go:194 +0x115 fp=0xc000317a20 sp=0xc0003179f8 pc=0x85dd75
    github.com/intel-go/nff-go/flow.separate(0x1170b3440, 0xc000226310, 0xc87c90, 0xc00021e108, 0x3)
            /home/mike/upf/nff-go/flow/flow.go:1796 +0x48 fp=0xc000317a50 sp=0xc000317a20 pc=0x7f1408
    github.com/intel-go/nff-go/flow.segmentProcess(0xb7b720, 0xc0002045a0, 0xc000184140, 0x11, 0x11, 0xc0001a0120, 0xc0001a0180, 0xc0001a8600, 0xc000310000, 0x3, ...)
            /home/mike/upf/nff-go/flow/flow.go:1466 +0x4d9 fp=0xc000317ef0 sp=0xc000317a50 pc=0x7f01f9
    github.com/intel-go/nff-go/flow.(*instance).startNewClone.func1(0xc000228780, 0x5, 0xc00018e900)
            /home/mike/upf/nff-go/flow/scheduler.go:289 +0x25e fp=0xc000317fc8 sp=0xc000317ef0 pc=0x7f77be
    runtime.goexit()
            /usr/local/go/src/runtime/asm_amd64.s:1371 +0x1 fp=0xc000317fd0 sp=0xc000317fc8 pc=0x5485e1
    created by github.com/intel-go/nff-go/flow.(*instance).startNewClone
            /home/mike/upf/nff-go/flow/scheduler.go:283 +0x2c5
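    The mutual exclusion the Intel documentation calls for can be sketched as a single lock that every transmit targeting the same port/queue pair must pass through. This is a minimal illustration under assumptions, not an existing nff-go API: lockedSend is a hypothetical wrapper, and the counter stands in for the actual C.rte_eth_tx_burst call.

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    // txMu serializes every transmit that targets the same port/queue pair,
    // as required when multiple threads share one hardware TX queue.
    var txMu sync.Mutex

    var sent int // guarded by txMu

    // lockedSend is a hypothetical wrapper: a real application would call
    // rte_eth_tx_burst (via low.DirectSend) inside the critical section
    // instead of incrementing a counter.
    func lockedSend(port, queue uint16) {
    	txMu.Lock()
    	defer txMu.Unlock()
    	sent++ // stands in for the actual burst transmit
    }

    func main() {
    	var wg sync.WaitGroup
    	// Simulate the flow-graph sender and the ARP/ICMP responder racing
    	// on the same port and queue.
    	for i := 0; i < 8; i++ {
    		wg.Add(1)
    		go func() {
    			defer wg.Done()
    			lockedSend(0, 0)
    		}()
    	}
    	wg.Wait()
    	fmt.Println(sent)
    }
    ```

    The catch the question points out remains: this only works if nff_go_send's own transmit path takes the same lock, which requires a change inside the library rather than in user code.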

Magma is an open-source software platform that gives network operators an open, flexible and extendable mobile core network solution.

Connecting the Next Billion People Magma is an open-source software platform that gives network operators an open, flexible and extendable mobile core

Dec 31, 2022
Optimize Windows' network/NIC driver settings for NewTek's NDI (Network Device Interface).

windows-ndi-optimizer[WIP] Optimize Windows's network/NIC driver settings for NewTek's NDI(Network-Device-Interface). How it works This is batchfile d

Apr 15, 2022
A simple network analyzer that captures http network traffic

httpcap A simple network analyzer that captures http network traffic. support Windows/MacOS/Linux/OpenWrt(x64) https only capture clienthello colorful

Oct 25, 2022
Zero Trust Network Communication Sentinel provides peer-to-peer, multi-protocol, automatic networking, cross-CDN and other features for network communication.

Thank you for your interest in ZASentinel ZASentinel helps organizations improve information security by providing a better and simpler way to protect

Nov 1, 2022
Extensible network application framework inspired by netty

GO-NETTY 中文介绍 Introduction go-netty is heavily inspired by netty Feature Extensible transport support, default support TCP, UDP, QUIC, KCP, Websocket

Dec 28, 2022
Go network programming framework, supports multiplexing, synchronous and asynchronous IO mode, modular design, and provides flexible custom interfaces

Go network programming framework that supports multiplexing, synchronous and asynchronous IO modes, and modular design, and provides flexible custom interfaces. The key is the transport layer; it is independent of the application-layer protocol.

Nov 7, 2022
Powerful golang network framework, supporting FFAX Protocol

X.NET framework Install $ go get github.com/RealFax/XNET This is a high-performance network framework, currently only supports tcp and FFAX protocol U

Nov 19, 2021
A function for chaos testing with OpenFaaS

chaos-fn A function for chaos testing with OpenFaaS Use-cases Test retries on certain HTTP codes Test timeouts Test certain lengths of HTTP request bo

May 26, 2022
idk, i'm just passed the function test case :D

Challenge Create an API that crawl links from given URL using GoLang or NodeJS. You can use any framework or libraries but your program should be can

Dec 23, 2021
Provides the function Parallel to create a synchronous in memory pipe and lets you write to and read from the pipe parallelly

iopipe provides the function Parallel to create a synchronous in-memory pipe and lets you write to and read from the pipe parallelly

Jan 25, 2022
🚀Gev is a lightweight, fast non-blocking TCP network library based on Reactor mode. Support custom protocols to quickly and easily build high-performance servers.

gev 中文 | English gev is a lightweight, fast non-blocking TCP network library based on Reactor mode. Support custom protocols to quickly and easily bui

Jan 6, 2023
gNXI Tools - gRPC Network Management/Operations Interface Tools

gNxI Tools gNMI - gRPC Network Management Interface gNOI - gRPC Network Operations Interface A collection of tools for Network Management that use the

Dec 15, 2022
Simulate network link speed

linkio linkio provides an io.Reader and io.Writer that simulate a network connection of a certain speed, e.g. to simulate a mobile connection. Quick s

Sep 27, 2022
Send network packets over a TCP or UDP connection.

Packet is the main class representing a single network message. It has a byte code indicating the type of the message and a []byte type payload.

Nov 28, 2022
A cloud native distributed streaming network telemetry.

Panoptes Streaming Panoptes Streaming is a cloud native distributed streaming network telemetry. It can be installed as a single binary or clustered n

Sep 27, 2022
Package raw enables reading and writing data at the device driver level for a network interface. MIT Licensed.

raw Package raw enables reading and writing data at the device driver level for a network interface. MIT Licensed. For more information about using ra

Dec 28, 2022
:alarm_clock: :fire: A TCP proxy to simulate network and system conditions for chaos and resiliency testing

Toxiproxy Toxiproxy is a framework for simulating network conditions. It's made specifically to work in testing, CI and development environments, supp

Jan 7, 2023
Dockin CNI - Dockin Container Network Interface


Aug 12, 2022
An imageboard, but images are stored in a peer-to-peer network

Interplanetary File Dumpster An imageboard, but images are stored in a peer-to-peer network Features: Easy file sharing without registration and SMS.

Sep 30, 2022