Demonstrations of biolatency, the Linux eBPF/bcc version.


biolatency traces block device I/O (disk I/O), and records the distribution
of I/O latency (time), printing this as a histogram when Ctrl-C is hit.
For example:

# ./biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs           : count     distribution
       0 -> 1        : 0        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 0        |                                      |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 1        |                                      |
     128 -> 255      : 12       |********                              |
     256 -> 511      : 15       |**********                            |
     512 -> 1023     : 43       |*******************************       |
    1024 -> 2047     : 52       |**************************************|
    2048 -> 4095     : 47       |**********************************    |
    4096 -> 8191     : 52       |**************************************|
    8192 -> 16383    : 36       |**************************            |
   16384 -> 32767    : 15       |**********                            |
   32768 -> 65535    : 2        |*                                     |
   65536 -> 131071   : 2        |*                                     |

The latency of the disk I/O is measured from when the request is issued to the
device until its completion. The -Q option can be used to also include the time
the request was queued in the kernel.

This example output shows a large mode of latency from about 128 microseconds
to about 32767 microseconds (33 milliseconds). The bulk of the I/O was
between 1 and 8 ms, which is the expected block device latency for
rotational storage devices.

The highest latency seen while tracing was between 65 and 131 milliseconds:
the last row printed, for which there were 2 I/Os.

For efficiency, biolatency uses an in-kernel eBPF map to store timestamps
with requests, and another in-kernel map to store the histogram (the "count"
column), which is copied to user space only when output is printed. These
methods lower the performance overhead of tracing.
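
To make this concrete, here is a minimal sketch of that approach using the bcc
Python API (not the full tool): one hash keyed by request pointer holds issue
timestamps, and a log2 histogram accumulates latencies entirely in kernel
context. The kprobe symbols shown (blk_mq_start_request, blk_account_io_done)
are assumptions that vary by kernel version; biolatency itself probes whichever
block-layer symbols the running kernel provides.

#!/usr/bin/python
# Hedged sketch of the biolatency approach: per-event data never
# crosses to user space; only the final histogram is copied out.
from bcc import BPF
import time

bpf_text = """
#include <uapi/linux/ptrace.h>
#include <linux/blkdev.h>

BPF_HASH(start, struct request *);   // request ptr -> issue timestamp
BPF_HISTOGRAM(dist);                 // log2(usecs) -> count

int trace_start(struct pt_regs *ctx, struct request *req) {
    u64 ts = bpf_ktime_get_ns();
    start.update(&req, &ts);
    return 0;
}

int trace_done(struct pt_regs *ctx, struct request *req) {
    u64 *tsp = start.lookup(&req);
    if (tsp == 0)
        return 0;                            // missed the issue event
    u64 delta = bpf_ktime_get_ns() - *tsp;
    dist.increment(bpf_log2l(delta / 1000)); // nanoseconds -> usecs
    start.delete(&req);
    return 0;
}
"""

b = BPF(text=bpf_text)
# Symbol names are assumptions here and vary by kernel version.
b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start")
b.attach_kprobe(event="blk_account_io_done", fn_name="trace_done")

print("Tracing block device I/O... Hit Ctrl-C to end.")
try:
    time.sleep(99999999)
except KeyboardInterrupt:
    pass
b["dist"].print_log2_hist("usecs")           # one copy-out at exit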


In the following example, the -m option is used to print the histogram in
milliseconds (which eliminates the first several rows), -T to print timestamps
with the output, and a positional interval and count of "1 5" to print 1 second
summaries 5 times:

# ./biolatency -mT 1 5
Tracing block device I/O... Hit Ctrl-C to end.

06:20:16
     msecs           : count     distribution
       0 -> 1        : 36       |**************************************|
       2 -> 3        : 1        |*                                     |
       4 -> 7        : 3        |***                                   |
       8 -> 15       : 17       |*****************                     |
      16 -> 31       : 33       |**********************************    |
      32 -> 63       : 7        |*******                               |
      64 -> 127      : 6        |******                                |

06:20:17
     msecs           : count     distribution
       0 -> 1        : 96       |************************************  |
       2 -> 3        : 25       |*********                             |
       4 -> 7        : 29       |***********                           |
       8 -> 15       : 62       |***********************               |
      16 -> 31       : 100      |**************************************|
      32 -> 63       : 62       |***********************               |
      64 -> 127      : 18       |******                                |

06:20:18
     msecs           : count     distribution
       0 -> 1        : 68       |*************************             |
       2 -> 3        : 76       |****************************          |
       4 -> 7        : 20       |*******                               |
       8 -> 15       : 48       |*****************                     |
      16 -> 31       : 103      |**************************************|
      32 -> 63       : 49       |******************                    |
      64 -> 127      : 17       |******                                |

06:20:19
     msecs           : count     distribution
       0 -> 1        : 522      |*************************************+|
       2 -> 3        : 225      |****************                      |
       4 -> 7        : 38       |**                                    |
       8 -> 15       : 8        |                                      |
      16 -> 31       : 1        |                                      |

06:20:20
     msecs           : count     distribution
       0 -> 1        : 436      |**************************************|
       2 -> 3        : 106      |*********                             |
       4 -> 7        : 34       |**                                    |
       8 -> 15       : 19       |*                                     |
      16 -> 31       : 1        |                                      |

This shows how the I/O latency distribution changes over time.



The -Q option begins measuring I/O latency from when the request was first
queued in the kernel, and includes queuing latency:

# ./biolatency -Q
Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs           : count     distribution
       0 -> 1        : 0        |                                      |
       2 -> 3        : 0        |                                      |
       4 -> 7        : 0        |                                      |
       8 -> 15       : 0        |                                      |
      16 -> 31       : 0        |                                      |
      32 -> 63       : 0        |                                      |
      64 -> 127      : 0        |                                      |
     128 -> 255      : 3        |*                                     |
     256 -> 511      : 37       |**************                        |
     512 -> 1023     : 30       |***********                           |
    1024 -> 2047     : 18       |*******                               |
    2048 -> 4095     : 22       |********                              |
    4096 -> 8191     : 14       |*****                                 |
    8192 -> 16383    : 48       |*******************                   |
   16384 -> 32767    : 96       |**************************************|
   32768 -> 65535    : 31       |************                          |
   65536 -> 131071   : 26       |**********                            |
  131072 -> 262143   : 12       |****                                  |

This better reflects the latency suffered by the application (if it is
synchronous I/O), whereas the default mode without kernel queueing better
reflects the performance of the device.

Note that the storage device (and storage device controller) usually have
queues of their own, which are always included in the latency, with or
without -Q.
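
Internally, the difference is only where the start timestamp is taken. In
terms of the sketch shown earlier, the -Q behavior corresponds to moving the
start probe earlier in the block layer: one or the other of these attach
lines would be used, not both (the symbol names are again assumptions that
vary by kernel version):

# Default (device latency): start the clock when the request is
# dispatched to the device:
b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start")

# With -Q (queued + device latency): start the clock when the request
# enters the block layer instead; blk_account_io_start is an assumed
# symbol name here:
b.attach_kprobe(event="blk_account_io_start", fn_name="trace_start")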


The -D option prints a histogram per disk device. For example:

# ./biolatency -D
Tracing block device I/O... Hit Ctrl-C to end.
^C

Bucket disk = 'xvdb'
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 1        |                                        |
       256 -> 511        : 33       |**********************                  |
       512 -> 1023       : 36       |************************                |
      1024 -> 2047       : 58       |****************************************|
      2048 -> 4095       : 51       |***********************************     |
      4096 -> 8191       : 21       |**************                          |
      8192 -> 16383      : 2        |*                                       |

Bucket disk = 'xvdc'
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 1        |                                        |
       256 -> 511        : 38       |***********************                 |
       512 -> 1023       : 42       |*************************               |
      1024 -> 2047       : 66       |****************************************|
      2048 -> 4095       : 40       |************************                |
      4096 -> 8191       : 14       |********                                |

Bucket disk = 'xvda1'
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 0        |                                        |
       256 -> 511        : 18       |**********                              |
       512 -> 1023       : 67       |*************************************   |
      1024 -> 2047       : 35       |*******************                     |
      2048 -> 4095       : 71       |****************************************|
      4096 -> 8191       : 65       |************************************    |
      8192 -> 16383      : 65       |************************************    |
     16384 -> 32767      : 20       |***********                             |
     32768 -> 65535      : 7        |***                                     |

This output shows that xvda1 has much higher latency, usually between 0.5 ms
and 32 ms, whereas xvdc is usually between 0.2 ms and 4 ms.
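
The per-disk breakdown works by widening the histogram key: instead of just a
log2 slot, the key is a struct that also carries the disk name, and
print_log2_hist() groups on that extra field (-F applies the same technique
with the request flags in the key). Below is a hedged sketch of that variant,
under the same assumptions as the earlier sketch; note that the location of
the gendisk pointer moved in recent kernels (req->rq_disk before 5.17,
req->q->disk after), so the member access here is an assumption.

#!/usr/bin/python
# Hedged sketch of a per-disk (-D style) histogram in bcc.
from bcc import BPF
import time

bpf_text = """
#include <uapi/linux/ptrace.h>
#include <linux/blkdev.h>

typedef struct disk_key {
    char disk[DISK_NAME_LEN];   // histogram output is grouped on this
    u64 slot;                   // log2(usecs) bucket
} disk_key_t;

BPF_HASH(start, struct request *);
BPF_HISTOGRAM(dist, disk_key_t);

int trace_start(struct pt_regs *ctx, struct request *req) {
    u64 ts = bpf_ktime_get_ns();
    start.update(&req, &ts);
    return 0;
}

int trace_done(struct pt_regs *ctx, struct request *req) {
    u64 *tsp = start.lookup(&req);
    if (tsp == 0)
        return 0;
    u64 delta = bpf_ktime_get_ns() - *tsp;
    disk_key_t key = {.slot = bpf_log2l(delta / 1000)};
    // req->rq_disk on older kernels; req->q->disk on 5.17+.
    bpf_probe_read(&key.disk, sizeof(key.disk), req->rq_disk->disk_name);
    dist.increment(key);
    start.delete(&req);
    return 0;
}
"""

b = BPF(text=bpf_text)
b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_start")
b.attach_kprobe(event="blk_account_io_done", fn_name="trace_done")
print("Tracing block device I/O... Hit Ctrl-C to end.")
try:
    time.sleep(99999999)
except KeyboardInterrupt:
    pass
b["dist"].print_log2_hist("usecs", "disk")   # one histogram per disk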


The -F option prints a separate histogram for each unique set of request
flags. For example:

# ./biolatency.py -Fm
Tracing block device I/O... Hit Ctrl-C to end.
^C

flags = Read
     msecs               : count     distribution
         0 -> 1          : 180      |*************                           |
         2 -> 3          : 519      |****************************************|
         4 -> 7          : 60       |****                                    |
         8 -> 15         : 123      |*********                               |
        16 -> 31         : 68       |*****                                   |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 2        |                                        |
       128 -> 255        : 12       |                                        |
       256 -> 511        : 0        |                                        |
       512 -> 1023       : 1        |                                        |

flags = Sync-Write
     msecs               : count     distribution
         0 -> 1          : 5        |****************************************|

flags = Flush
     msecs               : count     distribution
         0 -> 1          : 2        |****************************************|

flags = Metadata-Read
     msecs               : count     distribution
         0 -> 1          : 3        |****************************************|
         2 -> 3          : 2        |**************************              |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 1        |*************                           |
        16 -> 31         : 1        |*************                           |

flags = Write
     msecs               : count     distribution
         0 -> 1          : 103      |*******************************         |
         2 -> 3          : 106      |********************************        |
         4 -> 7          : 130      |****************************************|
         8 -> 15         : 79       |************************                |
        16 -> 31         : 5        |*                                       |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 0        |                                        |
       256 -> 511        : 1        |                                        |

flags = NoMerge-Read
     msecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 5        |****************************************|
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 1        |********                                |

flags = NoMerge-Write
     msecs               : count     distribution
         0 -> 1          : 30       |**                                      |
         2 -> 3          : 293      |********************                    |
         4 -> 7          : 564      |****************************************|
         8 -> 15         : 463      |********************************        |
        16 -> 31         : 21       |*                                       |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 0        |                                        |
       256 -> 511        : 5        |                                        |

flags = Priority-Metadata-Read
     msecs               : count     distribution
         0 -> 1          : 1        |****************************************|
         2 -> 3          : 0        |                                        |
         4 -> 7          : 1        |****************************************|
         8 -> 15         : 1        |****************************************|

flags = ForcedUnitAccess-Metadata-Sync-Write
     msecs               : count     distribution
         0 -> 1          : 2        |****************************************|

flags = ReadAhead-Read
     msecs               : count     distribution
         0 -> 1          : 15       |***************************             |
         2 -> 3          : 22       |****************************************|
         4 -> 7          : 14       |*************************               |
         8 -> 15         : 8        |**************                          |
        16 -> 31         : 1        |*                                       |

flags = Priority-Metadata-Write
     msecs               : count     distribution
         0 -> 1          : 9        |****************************************|

These can be handled differently by the storage device, and this mode lets us
examine their performance in isolation.


The -e option shows an extended summary (total and average).
For example:
# ./biolatency.py -e
^C
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 4        |***********                             |
       128 -> 255        : 2        |*****                                   |
       256 -> 511        : 4        |***********                             |
       512 -> 1023       : 14       |****************************************|
      1024 -> 2047       : 0        |                                        |
      2048 -> 4095       : 1        |**                                      |

avg = 663 usecs, total: 16575 usecs, count: 25

Sometimes a single log2 bucket, such as 512 -> 1023 usecs, is too coarse for
throughput tuning, especially when chasing a small performance regression.
With this extension, the average is printed as well (avg = total / count =
16575 / 25 = 663 usecs), locating the typical latency within that log2 range.


The -j option prints the histogram as a dictionary.
For example:

# ./biolatency.py -j
^C
{'ts': '2020-12-30 14:33:03', 'val_type': 'usecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 0}, {'interval-start': 2, 'interval-end': 3, 'count': 0}, {'interval-start': 4, 'interval-end': 7, 'count': 0}, {'interval-start': 8, 'interval-end': 15, 'count': 0}, {'interval-start': 16, 'interval-end': 31, 'count': 0}, {'interval-start': 32, 'interval-end': 63, 'count': 2}, {'interval-start': 64, 'interval-end': 127, 'count': 75}, {'interval-start': 128, 'interval-end': 255, 'count': 7}, {'interval-start': 256, 'interval-end': 511, 'count': 0}, {'interval-start': 512, 'interval-end': 1023, 'count': 6}, {'interval-start': 1024, 'interval-end': 2047, 'count': 3}, {'interval-start': 2048, 'interval-end': 4095, 'count': 31}]}

The key `data` holds the list of log2 histogram intervals: `interval-start` and
`interval-end` define each latency bucket, and `count` is the number of I/Os
that fall in that latency range.

# ./biolatency.py -jF
^C
{'ts': '2020-12-30 14:37:59', 'val_type': 'usecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 0}, {'interval-start': 2, 'interval-end': 3, 'count': 0}, {'interval-start': 4, 'interval-end': 7, 'count': 0}, {'interval-start': 8, 'interval-end': 15, 'count': 0}, {'interval-start': 16, 'interval-end': 31, 'count': 1}, {'interval-start': 32, 'interval-end': 63, 'count': 1}, {'interval-start': 64, 'interval-end': 127, 'count': 0}, {'interval-start': 128, 'interval-end': 255, 'count': 0}, {'interval-start': 256, 'interval-end': 511, 'count': 0}, {'interval-start': 512, 'interval-end': 1023, 'count': 0}, {'interval-start': 1024, 'interval-end': 2047, 'count': 2}], 'flags': 'Sync-Write'}
{'ts': '2020-12-30 14:37:59', 'val_type': 'usecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 0}, {'interval-start': 2, 'interval-end': 3, 'count': 0}, {'interval-start': 4, 'interval-end': 7, 'count': 0}, {'interval-start': 8, 'interval-end': 15, 'count': 0}, {'interval-start': 16, 'interval-end': 31, 'count': 0}, {'interval-start': 32, 'interval-end': 63, 'count': 0}, {'interval-start': 64, 'interval-end': 127, 'count': 0}, {'interval-start': 128, 'interval-end': 255, 'count': 2}, {'interval-start': 256, 'interval-end': 511, 'count': 0}, {'interval-start': 512, 'interval-end': 1023, 'count': 2}, {'interval-start': 1024, 'interval-end': 2047, 'count': 1}], 'flags': 'Unknown'}
{'ts': '2020-12-30 14:37:59', 'val_type': 'usecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 0}, {'interval-start': 2, 'interval-end': 3, 'count': 0}, {'interval-start': 4, 'interval-end': 7, 'count': 0}, {'interval-start': 8, 'interval-end': 15, 'count': 0}, {'interval-start': 16, 'interval-end': 31, 'count': 0}, {'interval-start': 32, 'interval-end': 63, 'count': 0}, {'interval-start': 64, 'interval-end': 127, 'count': 0}, {'interval-start': 128, 'interval-end': 255, 'count': 0}, {'interval-start': 256, 'interval-end': 511, 'count': 0}, {'interval-start': 512, 'interval-end': 1023, 'count': 0}, {'interval-start': 1024, 'interval-end': 2047, 'count': 1}], 'flags': 'Write'}
{'ts': '2020-12-30 14:37:59', 'val_type': 'usecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 0}, {'interval-start': 2, 'interval-end': 3, 'count': 0}, {'interval-start': 4, 'interval-end': 7, 'count': 0}, {'interval-start': 8, 'interval-end': 15, 'count': 0}, {'interval-start': 16, 'interval-end': 31, 'count': 0}, {'interval-start': 32, 'interval-end': 63, 'count': 0}, {'interval-start': 64, 'interval-end': 127, 'count': 0}, {'interval-start': 128, 'interval-end': 255, 'count': 0}, {'interval-start': 256, 'interval-end': 511, 'count': 0}, {'interval-start': 512, 'interval-end': 1023, 'count': 4}], 'flags': 'Flush'}

The -j option used with -F prints a histogram dictionary per set of I/O flags.

# ./biolatency.py -jD
^C
{'ts': '2020-12-30 14:40:00', 'val_type': 'usecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 0}, {'interval-start': 2, 'interval-end': 3, 'count': 0}, {'interval-start': 4, 'interval-end': 7, 'count': 0}, {'interval-start': 8, 'interval-end': 15, 'count': 0}, {'interval-start': 16, 'interval-end': 31, 'count': 0}, {'interval-start': 32, 'interval-end': 63, 'count': 1}, {'interval-start': 64, 'interval-end': 127, 'count': 1}, {'interval-start': 128, 'interval-end': 255, 'count': 1}, {'interval-start': 256, 'interval-end': 511, 'count': 1}, {'interval-start': 512, 'interval-end': 1023, 'count': 6}, {'interval-start': 1024, 'interval-end': 2047, 'count': 1}, {'interval-start': 2048, 'interval-end': 4095, 'count': 3}], 'Bucket ptr': b'sda'}

The -j option used with -D prints a histogram dictionary per disk device.

# ./biolatency.py -jm
^C
{'ts': '2020-12-30 14:42:03', 'val_type': 'msecs', 'data': [{'interval-start': 0, 'interval-end': 1, 'count': 11}, {'interval-start': 2, 'interval-end': 3, 'count': 3}]}

The -j option with -m prints a millisecond histogram dictionary. The `val_type`
key is set to msecs.
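
Note that despite the flag name, these lines are Python dict reprs rather than
strict JSON: they use single quotes, and -D emits bytes literals like b'sda'.
A minimal post-processing sketch, using ast.literal_eval for that reason
(the file name "biolatency.out" is a hypothetical capture of the -j output):

#!/usr/bin/python
# Hedged sketch: summarize captured biolatency -j output lines.
import ast

with open("biolatency.out") as f:
    for line in f:
        line = line.strip()
        if not line.startswith("{"):
            continue          # skip prompts, ^C, and blank lines
        rec = ast.literal_eval(line)
        total = sum(bucket["count"] for bucket in rec["data"])
        print(rec["ts"], rec["val_type"], "total I/Os:", total)

The extra keys added by -F ('flags') and -D ('Bucket ptr') pass through
ast.literal_eval unchanged, so the same loop handles those modes too.
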

USAGE message:

# ./biolatency -h
usage: biolatency.py [-h] [-T] [-Q] [-m] [-D] [-F] [-e] [-j] [-d DISK]
                     [interval] [count]

Summarize block device I/O latency as a histogram

positional arguments:
  interval              output interval, in seconds
  count                 number of outputs

optional arguments:
  -h, --help            show this help message and exit
  -T, --timestamp       include timestamp on output
  -Q, --queued          include OS queued time in I/O time
  -m, --milliseconds    millisecond histogram
  -D, --disks           print a histogram per disk device
  -F, --flags           print a histogram per set of I/O flags
  -e, --extension       summarize average/total value
  -j, --json            json output
  -d DISK, --disk DISK  Trace this disk only

examples:
    ./biolatency                    # summarize block I/O latency as a histogram
    ./biolatency 1 10               # print 1 second summaries, 10 times
    ./biolatency -mT 1              # 1s summaries, milliseconds, and timestamps
    ./biolatency -Q                 # include OS queued time in I/O time
    ./biolatency -D                 # show each disk device separately
    ./biolatency -F                 # show I/O flags separately
    ./biolatency -j                 # print a dictionary
    ./biolatency -e                 # show extension summary(total, average)
    ./biolatency -d sdc             # Trace sdc only
