Wifi-Capacity







Sun Apr 26 22:00:46 PDT 2020


Objective

The Candela WiFi Capacity test measures the performance of an Access Point as it handles increasing numbers of WiFi stations. The test lets the user grow the station count in user-defined steps for each test iteration and measures per-station and overall throughput for each trial. Besides throughput, the test also records client connection times, fairness, % packet loss, DHCP times, and more. The expected behavior is for the AP to handle many stations (within the limits of its specifications) while giving all stations a fair share of airtime in both the upstream and downstream directions. An AP that scales well will not show a significant overall throughput decrease as more stations are added.




The realtime graph shows the combined download and upload RX bps of the connections created by this test.
Realtime BPS


Total bits per second transferred. This counts only the protocol payload, so Ethernet, IP, UDP, TCP, and other header overhead is excluded. A well-behaving system will show roughly the same rate as stations are added. If the rate decreases significantly as the station count grows, the system is not scaling well.
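Because the reported rate excludes header overhead, the on-the-wire rate is somewhat higher than the payload rate. A rough sketch of the payload share for a full-size TCP/IPv4 frame on Ethernet (standard minimum header sizes; this deliberately ignores the Ethernet preamble, inter-frame gap, and all 802.11 MAC/PHY framing):

```python
# Payload share of a full-size TCP/IPv4 frame on Ethernet.
# Header sizes are the standard minimums (no IP/TCP options, untagged frame).
ETH_OVERHEAD = 14 + 4    # Ethernet header + FCS, bytes
IP_HEADER = 20           # IPv4 header, no options
TCP_HEADER = 20          # TCP header, no options
MSS = 1460               # TCP payload per frame at a 1500-byte MTU

def payload_fraction(mss: int = MSS) -> float:
    """Fraction of each Ethernet frame that is TCP payload."""
    frame = ETH_OVERHEAD + IP_HEADER + TCP_HEADER + mss
    return mss / frame

print(f"{payload_fraction():.1%}")  # -> 96.2%
```

So even before WiFi overhead, the payload-only numbers in this report understate the wire rate by a few percent.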
If selected, the Golden AP comparison graphs are added. The Golden AP tests were run in an isolation chamber with open encryption and a conducted (cabled) connection, using a LANforge CT525 with wave-1 3x3 NICs as the stations.
Total Kbps Received vs Number of Stations Active

Text Data for Kbps Upload/Download



Protocol Data Units (PDUs) received per second. For TCP this number is not very meaningful, but for UDP connections it corresponds to packet size: if the PDU is larger than what fits into a single frame, the network stack segments it accordingly. A well-behaving system will show roughly the same rate as stations are added. If the rate decreases significantly as the station count grows, the system is not scaling well.
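The segmentation mentioned above can be sketched for UDP, where a PDU larger than the link MTU is carried in multiple IP fragments (illustrative only; sizes assume IPv4 over Ethernet with no IP options):

```python
import math

MTU = 1500        # typical Ethernet MTU, bytes
IP_HEADER = 20    # IPv4 header, no options

def ip_fragments(udp_datagram_len: int, mtu: int = MTU) -> int:
    """How many IP fragments carry one UDP datagram (UDP header + payload).

    Every fragment except the last must carry a multiple of 8 payload
    bytes (RFC 791); at a 1500-byte MTU that is 1480 bytes per fragment.
    """
    per_fragment = (mtu - IP_HEADER) // 8 * 8   # 1480 at MTU 1500
    return math.ceil(udp_datagram_len / per_fragment)

print(ip_fragments(1472 + 8))   # 1472-byte payload + UDP header fits one frame -> 1
print(ip_fragments(4000 + 8))   # 4000-byte payload -> 3 fragments
```

This is why a large configured PDU size lowers the PDU/s count without necessarily lowering the bps rate.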
Total PDU/s Received vs Number of Stations Active

Text Data for Pps Upload/Download



Station disconnect statistics, reported only for the last iteration. If the 'Clear Reset Counters' option is selected, the counters are cleared after the initial association, so any re-connects reported afterwards indicate a potential stability issue. This can be used for long-term stability testing by bringing up all stations in a single iteration and then running the test for a longer duration.
Port Reset Totals


Station connect time is calculated from the initial Authenticate message through the completion of Open or RSN association/authentication.
Station Connect Times

Wifi-Capacity Test requested values
Station Increment: 1,2,5,10,20,45,60,100
Loop Iterations: Single (1)
Duration: 1 min (1 m)
Protocol: TCP-IPv4
Layer 4-7 Endpoint: NONE
Payload Size: AUTO
MSS: AUTO
Total Download Rate: 1G (1 Gbps)
Total Upload Rate: 1G (1 Gbps)
Percentage TCP Rate: 10% (10%)
Set Bursty Minimum Speed: Burst Mode Disabled (-1)
Randomize Rates: true
Leave Ports Up: false
Socket buffer size: OS Default
Settle Time: 5 sec (5 s)
Rpt Timer: fast (1 s)
IP ToS: Best Effort (0)
Multi-Conn: AUTO
Show-Per-Iteration-Charts: true
Show-Per-Loop-Totals: true
Hunt-Lower-Rates: false
Show Events: true
Clear Reset Counters: false
CSV Reporting Dir: - not selected -
Build Date Sun Apr 26 10:17:18 PDT 2020
Build Version 5.4.2
Git Version 3cefa04ae56a5121e1ed74a79e7d09ad5f3b61c1
Ports 1.1.eth0 1.1.sta00000 1.1.sta00001 1.1.sta00002 1.1.sta00003 1.1.sta00004 1.1.sta00005 1.1.sta00006 1.1.sta00007 1.1.sta00008 1.1.sta00009 1.1.sta00010 1.1.sta00011 1.1.sta00012 1.1.sta00013 1.1.sta00014 1.1.sta00015 1.1.sta00016 1.1.sta00017 1.1.sta00018 1.1.sta00019 1.1.sta00020 1.1.sta00021 1.1.sta00022 1.1.sta00023 1.1.sta00024 1.1.sta00025 1.1.sta00026 1.1.sta00027 1.1.sta00028 1.1.sta00029 1.1.sta00030 1.1.sta00031 1.1.sta00032 1.1.sta00033 1.1.sta00034 1.1.sta00035 1.1.sta00036 1.1.sta00037 1.1.sta00038 1.1.sta00039 1.1.sta00040 1.1.sta00041 1.1.sta00042 1.1.sta00043 1.1.sta00044 1.1.sta00045 1.1.sta00046 1.1.sta00047 1.1.sta00048 1.1.sta00049 1.1.sta00050 1.1.sta00051 1.1.sta00052 1.1.sta00053 1.1.sta00054 1.1.sta00055 1.1.sta00056 1.1.sta00057 1.1.sta00058 1.1.sta00059 1.1.sta00060 1.1.sta00061 1.1.sta00062 1.1.sta00063 1.1.sta00500 1.1.sta00501 1.1.sta00502 1.1.sta00503 1.1.sta00504 1.1.sta00505 1.1.sta00506 1.1.sta00507 1.1.sta00508 1.1.sta00509 1.1.sta00510 1.1.sta00511 1.1.sta00512 1.1.sta00513 1.1.sta00514 1.1.sta00515 1.1.sta00516 1.1.sta00517 1.1.sta00518 1.1.sta00519 1.1.sta00520 1.1.sta00521 1.1.sta00522 1.1.sta00523 1.1.sta00524 1.1.sta00525 1.1.sta00526 1.1.sta00527 1.1.sta00528 1.1.sta00529 1.1.sta00530 1.1.sta00531 1.1.sta00532 1.1.sta00533 1.1.sta00534 1.1.sta00535 1.1.sta00536 1.1.sta00537 1.1.sta00538 1.1.sta00539 1.1.sta00540 1.1.sta00541 1.1.sta00542 1.1.sta00543 1.1.sta00544 1.1.sta00545 1.1.sta00546 1.1.sta00547 1.1.sta00548 1.1.sta00549 1.1.sta00550 1.1.sta00551 1.1.sta00552 1.1.sta00553 1.1.sta00554 1.1.sta00555 1.1.sta00556 1.1.sta00557 1.1.sta00558 1.1.sta00559 1.1.sta00560 1.1.sta00561 1.1.sta00562 1.1.sta00563 1.1.sta00564 1.1.sta00565 1.1.sta00566 1.1.sta00567 1.1.sta00568 1.1.sta00569 1.1.sta00570 1.1.sta00571 1.1.sta00572 1.1.sta00573 1.1.sta00574 1.1.sta00575 1.1.sta00576 1.1.sta00577 1.1.sta00578 1.1.sta00579 1.1.sta00580 1.1.sta00581 1.1.sta00582 1.1.sta00583 1.1.sta00584 1.1.sta00585 1.1.sta00586 1.1.sta00587 
1.1.sta00588 1.1.sta00589 1.1.sta00590 1.1.sta00591 1.1.sta00592 1.1.sta00593 1.1.sta00594 1.1.sta00595 1.1.sta00596 1.1.sta00597 1.1.sta00598 1.1.sta00599 1.1.sta01000 1.1.sta01001 1.1.sta01002 1.1.sta01003 1.1.sta01004 1.1.sta01005 1.1.sta01006 1.1.sta01007 1.1.sta01008 1.1.sta01009 1.1.sta01010 1.1.sta01011 1.1.sta01012 1.1.sta01013 1.1.sta01014 1.1.sta01015 1.1.sta01016 1.1.sta01017 1.1.sta01018 1.1.sta01019 1.1.sta01020 1.1.sta01021 1.1.sta01022 1.1.sta01023 1.1.sta01024 1.1.sta01025 1.1.sta01026 1.1.sta01027 1.1.sta01028 1.1.sta01029 1.1.sta01030 1.1.sta01031 1.1.sta01032 1.1.sta01033 1.1.sta01034 1.1.sta01035 1.1.sta01036 1.1.sta01037 1.1.sta01038 1.1.sta01039 1.1.sta01040 1.1.sta01041 1.1.sta01042 1.1.sta01043 1.1.sta01044 1.1.sta01045 1.1.sta01046 1.1.sta01047 1.1.sta01048 1.1.sta01049 1.1.sta01050 1.1.sta01051 1.1.sta01052 1.1.sta01053 1.1.sta01054 1.1.sta01055 1.1.sta01056 1.1.sta01057 1.1.sta01058 1.1.sta01059 1.1.sta01060 1.1.sta01061 1.1.sta01062 1.1.sta01063
Firmware N/A 10.1-ct-8x-__xtH-022-db8cfc6c 0.3-0
Machines ben-ota-2
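The per-station rates in the "Requested Parameters" blocks below follow directly from dividing the 1 Gbps total by the station count; integer division reproduces the report's exact values (e.g. 22222222 bps at 45 stations, 7142857 bps at 140):

```python
TOTAL_RATE = 1_000_000_000   # 1 Gbps requested per direction

# Station counts run in this report: the configured increments
# (1,2,5,10,20,45,60,100) plus the later iterations that appear
# further down (140, 180, 220).
for n in [1, 2, 5, 10, 20, 45, 60, 100, 140, 180, 220]:
    per_station = TOTAL_RATE // n   # integer division, matching the report
    print(f"{n:>3} stations: {per_station:>10} bps ({per_station / 1e6:.3f} Mbps)")
```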




Requested Parameters:
Download Rate: Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 1   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min: 270.489 Mbps  Cx Ave: 270.489 Mbps  Cx Max: 270.489 Mbps  All Cx: 270.489 Mbps
Upload Rate:       Cx Min: 126.266 Mbps  Cx Ave: 126.266 Mbps  Cx Max: 126.266 Mbps  All Cx: 126.266 Mbps
                                                                                     Total: 396.755 Mbps
Aggregated Rate:   Min:    396.755 Mbps  Avg:    396.755 Mbps  Max:    396.755 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.
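The report summarizes per-connection rates as Cx min/average/max. One common way to condense per-station rates into a single fairness number (not something this report computes itself) is Jain's fairness index, where 1.0 is perfectly fair and 1/n is the worst case:

```python
def jain_index(rates):
    """Jain's fairness index of a list of per-station rates."""
    n = len(rates)
    total = sum(rates)
    return total * total / (n * sum(r * r for r in rates))

# Illustrative values only (the report publishes min/avg/max, not the
# full per-station list): equal rates score 1.0, skewed rates score lower.
print(jain_index([100.0] * 5))                        # -> 1.0
print(round(jain_index([20.0, 25.0, 30.0, 40.0, 65.0]), 3))
```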

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 1   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:     1.904 GB  Cx Ave:     1.904 GB  Cx Max:     1.904 GB  All Cx:     1.904 GB
Upload Amount:     Cx Min:   898.302 MB  Cx Ave:   898.302 MB  Cx Max:   898.302 MB  All Cx:   898.302 MB
                                                                                     Total:      2.782 GB


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 2   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:  90.412 Mbps  Cx Ave: 115.391 Mbps  Cx Max:  140.37 Mbps  All Cx: 230.782 Mbps
Upload Rate:       Cx Min:  64.749 Mbps  Cx Ave:  67.885 Mbps  Cx Max:  71.022 Mbps  All Cx: 135.771 Mbps
                                                                                     Total: 366.553 Mbps
Aggregated Rate:   Min:     155.16 Mbps  Avg:    183.276 Mbps  Max:    211.392 Mbps


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 2   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:   653.104 MB  Cx Ave:   831.609 MB  Cx Max: 1,010.115 MB  All Cx:     1.624 GB
Upload Amount:     Cx Min:   454.134 MB  Cx Ave:   464.603 MB  Cx Max:   475.073 MB  All Cx:   929.207 MB
                                                                                     Total:      2.532 GB


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 5   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:  21.235 Mbps  Cx Ave:  36.156 Mbps  Cx Max:  63.202 Mbps  All Cx: 180.778 Mbps
Upload Rate:       Cx Min:  24.486 Mbps  Cx Ave:  30.035 Mbps  Cx Max:  33.441 Mbps  All Cx: 150.175 Mbps
                                                                                     Total: 330.953 Mbps
Aggregated Rate:   Min:     45.721 Mbps  Avg:     66.191 Mbps  Max:     96.644 Mbps


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 5   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:   154.805 MB  Cx Ave:   261.689 MB  Cx Max:   455.912 MB  All Cx:     1.278 GB
Upload Amount:     Cx Min:   203.017 MB  Cx Ave:   211.932 MB  Cx Max:   227.819 MB  All Cx:     1.035 GB
                                                                                     Total:      2.313 GB


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 10   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:   6.898 Mbps  Cx Ave:  18.495 Mbps  Cx Max:  54.201 Mbps  All Cx: 184.951 Mbps
Upload Rate:       Cx Min:   9.018 Mbps  Cx Ave:  13.153 Mbps  Cx Max:  21.968 Mbps  All Cx: 131.529 Mbps
                                                                                     Total:  316.48 Mbps
Aggregated Rate:   Min:     15.916 Mbps  Avg:     31.648 Mbps  Max:     76.169 Mbps


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 10   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:    49.591 MB  Cx Ave:   133.578 MB  Cx Max:   390.632 MB  All Cx:     1.304 GB
Upload Amount:     Cx Min:    63.554 MB  Cx Ave:   102.611 MB  Cx Max:   146.259 MB  All Cx:     1.002 GB
                                                                                     Total:      2.307 GB


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 20   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:   2.049 Mbps  Cx Ave:   5.641 Mbps  Cx Max:  20.587 Mbps  All Cx: 112.828 Mbps
Upload Rate:       Cx Min:   6.053 Mbps  Cx Ave:   8.673 Mbps  Cx Max:  11.197 Mbps  All Cx: 173.464 Mbps
                                                                                     Total: 286.292 Mbps
Aggregated Rate:   Min:      8.102 Mbps  Avg:     14.315 Mbps  Max:     31.784 Mbps


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 20   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:     14.76 MB  Cx Ave:    40.633 MB  Cx Max:   147.968 MB  All Cx:   812.656 MB
Upload Amount:     Cx Min:    49.846 MB  Cx Ave:    62.402 MB  Cx Max:    69.572 MB  All Cx:     1.219 GB
                                                                                     Total:      2.012 GB


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 45   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min: 332.282 Kbps  Cx Ave:   1.546 Mbps  Cx Max:  11.139 Mbps  All Cx:  69.587 Mbps
Upload Rate:       Cx Min:  12.749 Kbps  Cx Ave:   2.976 Mbps  Cx Max:   5.439 Mbps  All Cx: 133.899 Mbps
                                                                                     Total: 203.486 Mbps
Aggregated Rate:   Min:    345.031 Kbps  Avg:      4.522 Mbps  Max:     16.578 Mbps


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 45   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:      2.37 MB  Cx Ave:     11.03 MB  Cx Max:    79.453 MB  All Cx:   496.334 MB
Upload Amount:     Cx Min:    12.821 MB  Cx Ave:    18.998 MB  Cx Max:    25.656 MB  All Cx:   854.894 MB
                                                                                     Total:       1.32 GB


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 60   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.075 Mbps  Cx Max:   5.753 Mbps  All Cx:  64.475 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:    1.61 Mbps  Cx Max:   3.234 Mbps  All Cx:  96.629 Mbps
                                                                                     Total: 161.104 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.685 Mbps  Max:      8.987 Mbps
Non-Transmitting endpoints: (1)  tcp--1.eth0-01.sta00048-A


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 60   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     7.644 MB  Cx Max:    40.927 MB  All Cx:   458.642 MB
Upload Amount:     Cx Min:          0 B  Cx Ave:     11.14 MB  Cx Max:    23.322 MB  All Cx:   668.392 MB
                                                                                     Total:      1.101 GB
Non-Transmitting endpoints: (1)  tcp--1.eth0-01.sta00048-A


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 100   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.311 Mbps  Cx Max:  10.006 Mbps  All Cx: 131.098 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:    1.37 Mbps  Cx Max:   9.955 Mbps  All Cx: 137.043 Mbps
                                                                                     Total: 268.142 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.681 Mbps  Max:     19.961 Mbps
Non-Transmitting endpoints: (13)  tcp--1.eth0-01.sta00046-A tcp--1.eth0-01.sta00048-A tcp--1.eth0-01.sta00049-A tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 100   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     9.522 MB  Cx Max:    72.605 MB  All Cx:   952.169 MB
Upload Amount:     Cx Min:          0 B  Cx Ave:    10.443 MB  Cx Max:    73.086 MB  All Cx:      1.02 GB
                                                                                     Total:       1.95 GB
Non-Transmitting endpoints: (13)  tcp--1.eth0-01.sta00046-A tcp--1.eth0-01.sta00048-A tcp--1.eth0-01.sta00049-A tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 140   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.107 Mbps  Cx Max:   7.376 Mbps  All Cx: 154.942 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   1.145 Mbps  Cx Max:   6.585 Mbps  All Cx: 160.323 Mbps
                                                                                     Total: 315.265 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.252 Mbps  Max:     13.961 Mbps
Non-Transmitting endpoints: (23)  tcp--1.eth0-01.sta00502-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00506-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00508-A tcp--1.eth0-01.sta00509-A tcp--1.eth0-01.sta00510-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00523-A tcp--1.eth0-01.sta00524-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00528-A tcp--1.eth0-01.sta00529-A tcp--1.eth0-01.sta00530-A tcp--1.eth0-01.sta00531-A tcp--1.eth0-01.sta00532-A tcp--1.eth0-01.sta00533-A tcp--1.eth0-01.sta00574-A


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 140   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     8.154 MB  Cx Max:    53.647 MB  All Cx:     1.115 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:     8.005 MB  Cx Max:    37.032 MB  All Cx:     1.094 GB
                                                                                     Total:      2.209 GB
Non-Transmitting endpoints: (22)  tcp--1.eth0-01.sta00502-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00506-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00508-A tcp--1.eth0-01.sta00509-A tcp--1.eth0-01.sta00510-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00522-A tcp--1.eth0-01.sta00523-A tcp--1.eth0-01.sta00524-A tcp--1.eth0-01.sta00525-A tcp--1.eth0-01.sta00526-A tcp--1.eth0-01.sta00527-A tcp--1.eth0-01.sta00528-A tcp--1.eth0-01.sta00529-A tcp--1.eth0-01.sta00530-A tcp--1.eth0-01.sta00531-A tcp--1.eth0-01.sta00532-A tcp--1.eth0-01.sta00533-A


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 180   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:   1.413 Mbps  Cx Max:   5.655 Mbps  All Cx:  254.29 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   1.264 Mbps  Cx Max:   5.647 Mbps  All Cx: 227.595 Mbps
                                                                                     Total: 481.886 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.677 Mbps  Max:     11.302 Mbps
Non-Transmitting endpoints: (7)  tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00598-A


Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 180   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:    10.417 MB  Cx Max:    40.607 MB  All Cx:     1.831 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:    10.766 MB  Cx Max:    40.726 MB  All Cx:     1.893 GB
                                                                                     Total:      3.724 GB
Non-Transmitting endpoints: (7)  tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00520-A tcp--1.eth0-01.sta00521-A tcp--1.eth0-01.sta00598-A


Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 220   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave: 996.446 Kbps  Cx Max:   2.978 Mbps  All Cx: 219.218 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave:   1.454 Mbps  Cx Max:   6.083 Mbps  All Cx: 319.941 Mbps
                                                                                     Total: 539.159 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      2.451 Mbps  Max:      9.061 Mbps
Non-Transmitting endpoints: (14)  tcp--1.eth0-01.sta00048-A tcp--1.eth0-01.sta00049-A tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00598-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 220   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     7.115 MB  Cx Max:    21.277 MB  All Cx:     1.529 GB
Upload Amount:     Cx Min:          0 B  Cx Ave:      8.13 MB  Cx Max:    20.363 MB  All Cx:     1.747 GB
                                                                                     Total:      3.275 GB
Non-Transmitting endpoints: (14)  tcp--1.eth0-01.sta00048-A tcp--1.eth0-01.sta00049-A tcp--1.eth0-01.sta00050-A tcp--1.eth0-01.sta00051-A tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00062-A tcp--1.eth0-01.sta00063-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00598-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 228   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave: 553.163 Kbps  Cx Max:   1.513 Mbps  All Cx: 126.121 Mbps
Upload Rate:       Cx Min:        0 bps  Cx Ave: 868.768 Kbps  Cx Max:   2.372 Mbps  All Cx: 198.079 Mbps
                                                                                     Total: 324.201 Mbps
Aggregated Rate:   Min:           0 bps  Avg:      1.422 Mbps  Max:      3.886 Mbps
Non-Transmitting endpoints: (20)  tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A tcp--1.eth0-01.sta00060-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00598-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
Upload Rate:   Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    2000000000 (   2 Gbps)
Station count: 228   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:     3.685 MB  Cx Max:    10.069 MB  All Cx:   840.292 MB
Upload Amount:     Cx Min:          0 B  Cx Ave:     5.108 MB  Cx Max:    14.513 MB  All Cx:     1.137 GB
                                                                                     Total:      1.958 GB
Non-Transmitting endpoints: (20)  tcp--1.eth0-01.sta00052-A tcp--1.eth0-01.sta00053-A tcp--1.eth0-01.sta00054-A tcp--1.eth0-01.sta00055-A tcp--1.eth0-01.sta00056-A tcp--1.eth0-01.sta00057-A tcp--1.eth0-01.sta00058-A tcp--1.eth0-01.sta00059-A tcp--1.eth0-01.sta00060-A tcp--1.eth0-01.sta00061-A tcp--1.eth0-01.sta00504-A tcp--1.eth0-01.sta00505-A tcp--1.eth0-01.sta00507-A tcp--1.eth0-01.sta00511-A tcp--1.eth0-01.sta00512-A tcp--1.eth0-01.sta00513-A tcp--1.eth0-01.sta00514-A tcp--1.eth0-01.sta00515-A tcp--1.eth0-01.sta00516-A tcp--1.eth0-01.sta00598-A

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph



Maximum Stations Connected: 228
Stations NOT connected at this time: 0
Maximum Stations with IP Address: 228
Stations without IP at this time: 0

Station Maximums


RF stats give an indication of how congested the RF environment is. Channel activity is what the WiFi radio reports as the busy-time for the RF environment. This is expected to be near 100% when LANforge is running at maximum speed; at lower speeds it should be a lower percentage unless the RF environment is busy with other systems.
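On Linux, the same busy-time figure can be read outside of LANforge from `iw dev <iface> survey dump`, which reports per-channel active and busy times; utilization is the busy/active ratio. A minimal parser sketch (the interface name and sample numbers are illustrative, not from this report):

```python
import re

def channel_utilization(survey_text):
    """Parse `iw dev <iface> survey dump` output and return the busy-time
    percentage for the in-use channel, or None if it cannot be found."""
    active = busy = None
    in_use = False
    for line in survey_text.splitlines():
        if "frequency:" in line:
            # survey dump lists every channel; only the in-use one matters here
            in_use = "[in use]" in line
        elif in_use:
            m = re.search(r"channel (active|busy) time:\s*(\d+) ms", line)
            if m and m.group(1) == "active":
                active = int(m.group(2))
            elif m:
                busy = int(m.group(2))
    if active and busy is not None:
        return 100.0 * busy / active
    return None

# Illustrative survey output; interface and values are not from this run
sample = """Survey data from sta00000
\tfrequency:\t\t\t5180 MHz [in use]
\tnoise:\t\t\t\t-95 dBm
\tchannel active time:\t\t1000 ms
\tchannel busy time:\t\t350 ms
"""
print(channel_utilization(sample))  # 35.0
```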

RF Stats for Stations

RX-Signal and Activity Data



Link rate stats give an indication of how well rate-control is working. From the rate-control perspective, the 'RX' link rate corresponds to what the device-under-test is transmitting. If all of the stations are on the same radio, then the TX and RX encoding rates should be similar for all stations. If there is a definite pattern where some stations do not get a good RX rate, then the device-under-test probably has rate-control problems. The TX rate is the rate at which LANforge is transmitting.
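The pattern described above, where a few stations report a much lower RX encoding rate than their peers, can be flagged automatically. A minimal sketch (station names and rates are illustrative, not from this report) that flags stations below half the median RX link rate:

```python
from statistics import median

def flag_slow_stations(rx_rates_mbps, factor=0.5):
    """Return station names whose RX link rate is below `factor` times the
    median, which may indicate a rate-control problem on the DUT."""
    med = median(rx_rates_mbps.values())
    return sorted(sta for sta, rate in rx_rates_mbps.items()
                  if rate < factor * med)

# Illustrative RX link rates in Mbps (not measured values)
rates = {"sta00000": 433.3, "sta00001": 390.0, "sta00002": 433.3,
         "sta00003": 86.7, "sta00004": 433.3}
print(flag_slow_stations(rates))  # ['sta00003']
```

Using the median rather than the mean keeps a handful of very slow stations from masking themselves by dragging the reference value down.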

Link Rate for Stations

TX/RX Link Rate Data


Key Performance Indicators CSV



Scan Results for SSIDs used in this test.

BSS 30:23:03:81:9c:28(on sta00000) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 5180
	beacon interval: 100 TUs
	capability: ESS (0x0001)
	signal: -19.00 dBm
	last seen: 98 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-5lo
	Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 
	DS Parameter set: channel 36
	BSS Load:
		 * station count: 64
		 * channel utilisation: 35/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 128
	HT capabilities:
		Capabilities: 0x9ef
			RX LDPC
			HT20/HT40
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			No DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 36
		 * secondary channel offset: above
		 * STA channel width: any
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
		 * Max Number Of MSDUs In A-MSDU is unlimited
	VHT capabilities:
		VHT Capabilities (0x338819b2):
			Max MPDU length: 11454
			Supported Channel Width: neither 160 nor 80+80
			RX LDPC
			short GI (80 MHz)
			TX STBC
			SU Beamformer
			SU Beamformee
			MU Beamformer
			RX antenna pattern consistency
			TX antenna pattern consistency
		VHT RX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT TX highest supported: 0 Mbps
	VHT operation:
		 * channel width: 1 (80 MHz)
		 * center freq segment 1: 42
		 * center freq segment 2: 0
		 * VHT basic MCS set: 0xfffc
	Transmit Power Envelope:
		 * Local Maximum Transmit Power For 20 MHz: 23 dBm
		 * Local Maximum Transmit Power For 40 MHz: 23 dBm
		 * Local Maximum Transmit Power For 80 MHz: 23 dBm
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec


BSS 30:23:03:81:9c:27(on sta00500) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 2462
	beacon interval: 100 TUs
	capability: ESS ShortPreamble ShortSlotTime (0x0421)
	signal: -16.00 dBm
	last seen: 25 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-2
	Supported rates: 1.0* 2.0* 5.5* 11.0* 6.0 9.0 12.0 18.0 
	DS Parameter set: channel 11
	ERP: <no flags>
	Extended supported rates: 24.0 36.0 48.0 54.0 
	BSS Load:
		 * station count: 100
		 * channel utilisation: 104/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 81
	HT capabilities:
		Capabilities: 0x19ed
			RX LDPC
			HT20
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 11
		 * secondary channel offset: no secondary
		 * STA channel width: 20 MHz
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec


BSS 32:23:03:81:9c:29(on sta01000) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 5745
	beacon interval: 100 TUs
	capability: ESS (0x0001)
	signal: -17.00 dBm
	last seen: 130 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-5hi
	Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 
	DS Parameter set: channel 149
	BSS Load:
		 * station count: 64
		 * channel utilisation: 17/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 128
	HT capabilities:
		Capabilities: 0x9ef
			RX LDPC
			HT20/HT40
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			No DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 149
		 * secondary channel offset: above
		 * STA channel width: any
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
		 * Max Number Of MSDUs In A-MSDU is unlimited
	VHT capabilities:
		VHT Capabilities (0x338819b2):
			Max MPDU length: 11454
			Supported Channel Width: neither 160 nor 80+80
			RX LDPC
			short GI (80 MHz)
			TX STBC
			SU Beamformer
			SU Beamformee
			MU Beamformer
			RX antenna pattern consistency
			TX antenna pattern consistency
		VHT RX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT TX highest supported: 0 Mbps
	VHT operation:
		 * channel width: 1 (80 MHz)
		 * center freq segment 1: 155
		 * center freq segment 2: 0
		 * VHT basic MCS set: 0xfffc
	Transmit Power Envelope:
		 * Local Maximum Transmit Power For 20 MHz: 30 dBm
		 * Local Maximum Transmit Power For 40 MHz: 30 dBm
		 * Local Maximum Transmit Power For 80 MHz: 30 dBm
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec

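The BSS Load elements in the scan results above encode channel utilisation as a fraction of 255 (per the IEEE 802.11 BSS Load element), so the three radios are easier to compare once converted to percent. A quick conversion of the values reported above:

```python
def bss_load_percent(raw):
    # The 802.11 BSS Load element encodes channel utilisation as raw/255
    return 100.0 * raw / 255

# Values taken from the three BSS Load elements in the scan results above
for ssid, raw in [("OpenWrt-5lo", 35), ("OpenWrt-2", 104), ("OpenWrt-5hi", 17)]:
    print(f"{ssid}: {bss_load_percent(raw):.1f}%")
```

This puts the 2.4 GHz radio (OpenWrt-2) at roughly 41% utilisation, well above the two 5 GHz radios.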


Generated by Candela Technologies LANforge network testing tool.
www.candelatech.com