Demonstrations of xfsdist, the Linux bpftrace/eBPF version.


xfsdist traces XFS reads, writes, opens, and fsyncs, and summarizes their
latency as a power-of-2 histogram. For example:

# xfsdist.bt
Attaching 9 probes...
Tracing XFS operation latency... Hit Ctrl-C to end.
^C

@us[xfs_file_write_iter]:
[8, 16)                1 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
[16, 32)               2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

@us[xfs_file_read_iter]:
[1]                  724 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2, 4)               137 |@@@@@@@@@                                           |
[4, 8)               143 |@@@@@@@@@@                                          |
[8, 16)               37 |@@                                                  |
[16, 32)              11 |                                                    |
[32, 64)              22 |@                                                   |
[64, 128)              7 |                                                    |
[128, 256)             0 |                                                    |
[256, 512)           485 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                  |
[512, 1K)            149 |@@@@@@@@@@                                          |
[1K, 2K)              98 |@@@@@@@                                             |
[2K, 4K)              85 |@@@@@@                                              |
[4K, 8K)              27 |@                                                   |
[8K, 16K)             29 |@@                                                  |
[16K, 32K)            25 |@                                                   |
[32K, 64K)             1 |                                                    |
[64K, 128K)            0 |                                                    |
[128K, 256K)           6 |                                                    |

@us[xfs_file_open]:
[1]                 1819 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[2, 4)               272 |@@@@@@@                                             |
[4, 8)                 0 |                                                    |
[8, 16)                9 |                                                    |
[16, 32)               7 |                                                    |

This output shows a bi-modal distribution for read latency, with a faster
mode of 724 reads that took between 0 and 1 microseconds, and a slower
mode of 485 reads that took between 256 and 512 microseconds. It's
likely that the faster mode was a hit from the in-memory file system cache,
and the slower mode a read from a storage device (disk).

This "latency" is measured from when the operation was issued from the VFS
interface to the file system, to when it completed. This spans everything:
block device I/O (disk I/O), file system CPU cycles, file system locks, run
queue latency, etc. This is a better measure of the latency suffered by
applications reading from the file system than a measurement taken down at
the block device interface alone.
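
The following is a minimal bpftrace sketch of how this timing can be
implemented, using the function names seen in the output above (the shipped
xfsdist.bt also prints a BEGIN banner and cleans up on END, which accounts
for the 9 probes, and may differ in other details):

kprobe:xfs_file_read_iter,
kprobe:xfs_file_write_iter,
kprobe:xfs_file_open,
kprobe:xfs_file_fsync
{
	// timestamp the operation as it enters XFS from the VFS interface
	@start[tid] = nsecs;
}

kretprobe:xfs_file_read_iter,
kretprobe:xfs_file_write_iter,
kretprobe:xfs_file_open,
kretprobe:xfs_file_fsync
/@start[tid]/
{
	// elapsed time in microseconds, as a power-of-2 histogram per function
	@us[func] = hist((nsecs - @start[tid]) / 1000);
	delete(@start[tid]);
}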

Note that this only traces the common file system operations previously
listed: other file system operations (eg, inode operations including
getattr()) are not traced.


There is another version of this tool in bcc: https://github.com/iovisor/bcc
The bcc version provides command line options to customize the output.
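
For illustration, typical invocations of the bcc version (flags as documented
in bcc; check xfsdist -h on your installed version for the authoritative
list):

# xfsdist             # trace until Ctrl-C, microsecond histograms
# xfsdist -m 5 3      # millisecond histograms, 5 second interval, 3 outputs
# xfsdist -p 181      # trace PID 181 only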
