The Storage Performance Development Kit iSCSI target application is named
iscsi_tgt. The following section describes how to run the iSCSI target from your cloned repository.
This guide starts by assuming that you can already build the standard SPDK distribution on your platform.
Once built, the binary will be in
If you want to kill the application with a signal, use SIGTERM; the application will then release all of its shared memory resources before exiting. SIGKILL gives the application no chance to release its shared memory resources, so you may need to release them manually.
The following diagram shows relations between different parts of iSCSI structure described in this document.
An iscsi_tgt-specific configuration file is used to configure the iSCSI target. A fully documented example configuration file is located at
The configuration file is used to configure the SPDK iSCSI target. This file defines the following:

- TCP ports to use as iSCSI portals
- General iSCSI parameters
- Initiator names and addresses allowed to access iSCSI target nodes
- Number and types of storage backends to export over iSCSI LUNs
- iSCSI target node mappings between portal groups, initiator groups, and LUNs
You should make a copy of the example configuration file, modify it to suit your environment, and then run the iscsi_tgt application and pass it the configuration file using the -c option. Right now, the target requires elevated privileges (root) to run.
SPDK uses the DPDK Environment Abstraction Layer to gain access to hardware resources such as huge memory pages and CPU core(s). DPDK EAL provides functions to assign threads to specific cores. To ensure the SPDK iSCSI target has the best performance, place the NICs and the NVMe devices on the same NUMA node and configure the target to run on CPU cores associated with that node. The following command line option is used to configure the SPDK iSCSI target:
This is a hexadecimal bit mask of the CPU cores where the iSCSI target will start polling threads. In this example, CPU cores 24, 25, 26 and 27 would be used.
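For example, the mask selecting cores 24 through 27 could be passed as sketched below (the binary path is an assumption; adjust it to where your build placed iscsi_tgt):

```shell
# Bits 24-27 set: 0xF shifted left by 24 gives 0xF000000,
# so cores 24, 25, 26 and 27 run the polling threads.
./iscsi_tgt -m 0xF000000 -c /path/to/iscsi.conf
```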
Each LUN in an iSCSI target node is associated with an SPDK block device. See Block Device User Guide for details on configuring SPDK block devices. The block device to LUN mappings are specified in the configuration file as:
This exports a malloc'd LUN: a RAM disk backed by a chunk of memory that the iSCSI target allocates in user space. If the system has enough DMA channels, the copy is offloaded to a DMA engine instead of using memcpy.
In addition to the configuration file, the iSCSI target may also be configured via JSON-RPC calls. See JSON-RPC Methods for details.
The Linux initiator is open-iscsi.
Installing the open-iscsi package on Fedora:
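On Fedora/RHEL the initiator tools ship in the `iscsi-initiator-utils` package; a typical install could look like (use dnf or yum as appropriate for your release):

```shell
# Install the open-iscsi initiator tools
yum install -y iscsi-initiator-utils
```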
iscsid must be restarted or receive SIGHUP for changes to take effect. To send SIGHUP, run:
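One way to send SIGHUP to a running iscsid (a sketch; adjust to how your distribution manages the daemon):

```shell
# Reload iscsid configuration without restarting the daemon
killall -HUP iscsid
```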
Recommended changes to /etc/sysctl.conf
Assume target is at 10.0.0.1
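With the target at 10.0.0.1, discovery and login with standard iscsiadm usage might look like:

```shell
# Discover targets exposed by the portal at 10.0.0.1
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
# Log in to all discovered target nodes
iscsiadm -m node --login
```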
At this point the iSCSI target should show up as SCSI disks. Check dmesg to see what they came up as.
This will cause the initiator to forget all previously discovered iSCSI target nodes.
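A sketch of the corresponding iscsiadm invocation:

```shell
# Delete all discovered target node records
iscsiadm -m node -o delete
```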
This will show the /dev node name for each SCSI LUN in all logged in iSCSI sessions.
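For example, the session detail output can be filtered for the attached disk names:

```shell
# Print the attached SCSI disk line for every logged-in session
iscsiadm -m session -P 3 | grep "Attached scsi disk"
```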
After the targets are connected, they can be tuned. For example, if /dev/sdc is an iSCSI disk, the following can be done: Set the I/O scheduler to noop
Disable merging/coalescing (can be useful for precise workload measurements)
Increase requests for block queue
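Assuming the iSCSI disk came up as /dev/sdc, the three tunings above might be applied through sysfs as follows (run as root; the device name and the value 1024 are illustrative):

```shell
# Use the noop I/O scheduler for the iSCSI disk
echo noop > /sys/block/sdc/queue/scheduler
# Disable request merging/coalescing
echo 2 > /sys/block/sdc/queue/nomerges
# Allow more outstanding requests in the block queue
echo 1024 > /sys/block/sdc/queue/nr_requests
```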
Assume we have one iSCSI target server with a portal at 10.0.0.1:3260, two LUNs (Malloc0 and Malloc1), accepting initiators from 10.0.0.2/32, as in the diagram below:
Start iscsi_tgt application:
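A sketch, assuming the binary landed in the default build output directory (adjust the path to your build):

```shell
# Start the SPDK iSCSI target
./build/bin/iscsi_tgt
```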
Construct two 64MB Malloc block devices with 512B sector size "Malloc0" and "Malloc1":
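Using the SPDK RPC script (rpc.py lives in the scripts/ directory of the SPDK tree; RPC method names vary across SPDK releases):

```shell
# Create two 64 MB malloc bdevs with a 512-byte block size
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
```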
Create new portal group with id 1, and address 10.0.0.1:3260:
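A sketch via rpc.py (method name as in current SPDK releases):

```shell
# Portal group 1 listening on 10.0.0.1:3260
./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
```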
Create one initiator group with id 2 to accept any connection from 10.0.0.2/32:
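A sketch via rpc.py (ANY accepts any initiator name; only the network mask restricts access):

```shell
# Initiator group 2: allow any initiator name from 10.0.0.2/32
./scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
```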
Finally, construct one target named "disk1" with alias "Data Disk1", using the previously created bdevs as LUN0 (Malloc0) and LUN1 (Malloc1), portal group 1, and initiator group 2.
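A sketch via rpc.py (the queue depth of 64 and the -d flag, which disables CHAP, are illustrative choices):

```shell
# Target "disk1" (alias "Data Disk1"): LUN0=Malloc0, LUN1=Malloc1,
# portal group 1 mapped to initiator group 2, queue depth 64, no CHAP
./scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1" 1:2 64 -d
```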
Connect to the target
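From the initiator host, standard iscsiadm discovery and login:

```shell
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node --login
```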
At this point the iSCSI target should show up as SCSI disks.
Check dmesg to see what they came up as. In this example, it may look like the output below:
You may also use a simple bash command to find the /dev/sdX node for each iSCSI LUN in all logged-in iSCSI sessions:
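For example, filtering the session detail output down to the device names:

```shell
# Extract the /dev/sdX name for every LUN in all logged-in sessions
iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
```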
Detailed instructions for the simplified steps 1-3 below can be found in the VPP Quick Start Guide.
SPDK supports VPP version 18.01.1.
Please skip this step if you are using already-built packages.
Clone and checkout VPP
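A sketch of the clone and checkout, assuming the upstream fd.io gerrit repository and the supported version stated above:

```shell
git clone https://gerrit.fd.io/r/vpp
cd vpp
git checkout v18.01.1
```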
Install VPP build dependencies
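The VPP build system provides a make target for this; from the top of the VPP tree:

```shell
# Install build dependencies for the current distribution
make install-dep
```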
Build and create .rpm packages
Alternatively, build and create .deb packages
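Both package formats are produced by VPP make targets; a sketch of the two paths:

```shell
# RPM-based distributions
make pkg-rpm
# Debian-based distributions
make pkg-deb
```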
Packages can be found in
For more in-depth instructions, please see the Building section in the VPP documentation.
Please note: VPP 18.01.1 does not support OpenSSL 1.1. It is suggested to install a compatibility package at compile time.
Then reinstall the latest OpenSSL devel package:
Packages can be installed from the distribution repository, or the packages built in the previous step can be used. The minimal set of packages consists of
Note: Please remove or modify /etc/sysctl.d/80-vpp.conf with values appropriate for the number of hugepages that will be used on the system.
VPP takes over any network interfaces that were bound to a userspace driver; for details please see the DPDK guide on Binding and Unbinding Network Ports to/from the Kernel Modules.
VPP is installed as a service and is disabled by default. To start VPP with the default config:
Alternatively, run the vpp binary directly.
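A sketch of both options (the startup config path is the packaged default; adjust to your installation):

```shell
# Start the installed service
sudo service vpp start
# ...or run the binary directly with the default startup config
sudo vpp -c /etc/vpp/startup.conf
```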
A useful tool is vppctl, which allows controlling a running VPP instance, either by entering the VPP configuration prompt
Or, by sending a single command directly. For example, to display interfaces within VPP:
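Both usages in one sketch:

```shell
# Enter the interactive VPP prompt
vppctl
# ...or send a single command non-interactively
vppctl show interface
```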
For functional test purposes, a virtual tap interface can be created, so no additional network hardware is required. This allows network communication between the SPDK iSCSI target, using the VPP end of the tap, and the kernel iSCSI initiator, using the kernel end of the tap. A single host is used in this scenario.
Create tap interface via VPP
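A sketch using vppctl (the tapcli-0 interface name is what VPP 18.01 assigns to the first tap; the addresses are the example values used in this scenario):

```shell
# Create the tap, bring the VPP side up, and give it an address
vppctl tap connect tap0
vppctl set interface state tapcli-0 up
vppctl set interface ip address tapcli-0 10.0.0.1/24
```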
Assign address on kernel interface
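On the kernel side of the tap, assign an address in the same subnet (addresses are the example values used in this scenario):

```shell
sudo ip addr add 10.0.0.2/24 dev tap0
sudo ip link set tap0 up
```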
To verify connectivity
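For example, ping the VPP end of the tap from the kernel side:

```shell
# Three probes are enough to confirm the path works
ping -c 3 10.0.0.1
```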
Support for VPP can be built into SPDK by using a configure-time option.
Alternatively, the directory containing the built libraries can be pointed to, and it will be used for compilation instead of the installed packages.
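Assuming the flag is `--with-vpp` (as in SPDK releases of that era; verify against your SPDK version), both variants might look like:

```shell
# Build against installed VPP packages
./configure --with-vpp
# ...or point at a directory containing built VPP libraries instead
./configure --with-vpp=/path/to/vpp/build-root
```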
The VPP application has to be started before the SPDK iSCSI target in order to enable use of the network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method.
At the iSCSI level, we provide the following support for Hotplug:
For write commands: testing hotplug with a write that causes R2T (for example, a 1 MB I/O) will crash the iSCSI target. For read commands: testing hotplug with a large read I/O (for example, a 1 MB I/O) will probably crash the iSCSI target.