Wifi-Capacity







Sun Apr 26 21:43:31 PDT 2020


Objective

The Candela WiFi Capacity test is designed to measure the performance of an Access Point when handling different numbers of WiFi stations. The test allows the user to increase the number of stations in user-defined steps for each test iteration, and it measures the per-station and overall throughput for each trial. Along with throughput, the test also measures client connection times, fairness, % packet loss, DHCP times, and more. The expected behavior is for the AP to handle several stations (within the limitations of the AP specs) and to give all stations a fair share of airtime in both the upstream and downstream directions. An AP that scales well will not show a significant overall throughput decrease as more stations are added.
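
In outline, each iteration brings up the next batch of stations, splits the requested total load evenly among them, runs traffic for the configured duration, and records throughput. A minimal sketch of that loop in Python (the helper functions are hypothetical stand-ins, not the LANforge API; the numbers mirror this report's settings):

    # Sketch of the capacity-test iteration loop.  The two helpers are
    # hypothetical placeholders for the real test automation.
    STATION_STEPS = [1, 2, 5, 10, 20, 45, 60, 100]   # station increments
    TOTAL_UPLOAD_BPS = 1_000_000_000                 # 1 Gbps total offered load
    DURATION_SEC = 60                                # 1 minute per iteration

    def bring_up_stations(count):
        """Hypothetical: associate 'count' stations and wait for DHCP."""
        return [f"sta{i:05d}" for i in range(count)]

    def run_traffic(stations, per_station_bps, duration_sec):
        """Hypothetical: offer UDP upload load, return measured bps per station."""
        return {sta: per_station_bps for sta in stations}  # placeholder result

    for count in STATION_STEPS:
        stations = bring_up_stations(count)
        measured = run_traffic(stations, TOTAL_UPLOAD_BPS // count, DURATION_SEC)
        print(count, "stations ->", sum(measured.values()), "bps total")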




The Realtime Graph shows the summary download and upload RX bps of the connections created by this test.
Realtime BPS


Total bits-per-second transferred. This counts only the protocol payload, so it does not include Ethernet, IP, UDP, TCP, or other header overhead. A well-behaved system will show about the same rate as stations are added; if the rate decreases significantly as stations increase, the AP is not scaling well.
If selected, the Golden AP comparison graphs will be added. Those baseline tests were done in an isolation chamber, with open encryption and a conducted connection, using a LANforge CT525 with wave-1 3x3 NICs as the stations.
Total Kbps Received vs Number of Stations Active

Text Data for Kbps Upload/Download
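
As a worked example of what the payload-only accounting above excludes, here are the standard header sizes for a full-size UDP/IPv4 Ethernet frame (these byte counts are general networking facts, not figures from this report):

    # Payload vs. on-the-wire bytes for one full-size UDP/IPv4 frame.
    payload = 1472                # UDP payload fitting a 1500-byte MTU
    udp_hdr, ip_hdr, eth_hdr = 8, 20, 14
    wire = payload + udp_hdr + ip_hdr + eth_hdr      # 1514 bytes
    print(f"payload fraction: {payload / wire:.3f}") # ~0.972
    # 802.11 framing adds further overhead not counted here.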



Protocol Data Units (PDUs) received. For TCP this does not mean much, but for UDP connections it corresponds to packet size: if the PDU size is larger than what fits into a single frame, the network stack will segment it accordingly. A well-behaved system will show about the same rate as stations are added; if the rate decreases significantly as stations increase, the AP is not scaling well.
Total PDU/s Received vs Number of Stations Active

Text Data for Pps Upload/Download
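
To relate the reported bits-per-second to PDUs-per-second for UDP, assume a fixed PDU size (with 'AUTO' sizing the actual size varies, so this is illustrative only):

    # Convert payload bps to PDUs per second for a fixed UDP PDU size.
    def pdus_per_sec(payload_bps, pdu_bytes):
        return payload_bps / (pdu_bytes * 8)

    # e.g. 275.9 Mbps of 1472-byte PDUs:
    print(round(pdus_per_sec(275.9e6, 1472)))   # ~23429 PDU/s
    # PDUs larger than the MTU are fragmented by the IP stack, so
    # on-the-wire packets/s can exceed the PDU/s reported here.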



Station disconnect stats. These are reported only for the last iteration. If the 'Clear Reset Counters' option is selected, the stats are cleared after the initial association, so any re-connects reported afterward indicate a potential stability issue. This can be used for long-term stability testing in cases where all stations are brought up in one iteration and the test is then run for a longer duration.
Port Reset Totals


Station connect time is calculated from the initial Authenticate message through the completion of Open or RSN association/authentication.
Station Connect Times
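
A minimal sketch of that connect-time definition (the timestamps and sample values here are hypothetical, not LANforge internals):

    # Connect time: first Authentication frame -> end of Open/RSN association.
    def connect_time_ms(auth_start_ms, assoc_done_ms):
        return assoc_done_ms - auth_start_ms

    # Summarizing across stations (made-up sample timestamps):
    times = [connect_time_ms(0, t) for t in (35, 48, 61)]
    print(min(times), sum(times) / len(times), max(times))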

Wifi-Capacity Test requested values
Station Increment: 1,2,5,10,20,45,60,100
Loop Iterations: Single (1)
Duration: 1 min (1 m)
Protocol: UDP-IPv4
Layer 4-7 Endpoint: NONE
Payload Size: AUTO
MSS: AUTO
Total Download Rate: Zero (0 bps)
Total Upload Rate: 1G (1 Gbps)
Percentage TCP Rate: 10% (10%)
Set Bursty Minimum Speed: Burst Mode Disabled (-1)
Randomize Rates: true
Leave Ports Up: false
Socket buffer size: OS Default
Settle Time: 5 sec (5 s)
Rpt Timer: fast (1 s)
IP ToS: Best Effort (0)
Multi-Conn: AUTO
Show-Per-Iteration-Charts: true
Show-Per-Loop-Totals: true
Hunt-Lower-Rates: false
Show Events: true
Clear Reset Counters: false
CSV Reporting Dir: - not selected -
Build Date: Sun Apr 26 10:17:18 PDT 2020
Build Version: 5.4.2
Git Version: 3cefa04ae56a5121e1ed74a79e7d09ad5f3b61c1
Ports 1.1.eth0 1.1.sta00000 1.1.sta00001 1.1.sta00002 1.1.sta00003 1.1.sta00004 1.1.sta00005 1.1.sta00006 1.1.sta00007 1.1.sta00008 1.1.sta00009 1.1.sta00010 1.1.sta00011 1.1.sta00012 1.1.sta00013 1.1.sta00014 1.1.sta00015 1.1.sta00016 1.1.sta00017 1.1.sta00018 1.1.sta00019 1.1.sta00020 1.1.sta00021 1.1.sta00022 1.1.sta00023 1.1.sta00024 1.1.sta00025 1.1.sta00026 1.1.sta00027 1.1.sta00028 1.1.sta00029 1.1.sta00030 1.1.sta00031 1.1.sta00032 1.1.sta00033 1.1.sta00034 1.1.sta00035 1.1.sta00036 1.1.sta00037 1.1.sta00038 1.1.sta00039 1.1.sta00040 1.1.sta00041 1.1.sta00042 1.1.sta00043 1.1.sta00044 1.1.sta00045 1.1.sta00046 1.1.sta00047 1.1.sta00048 1.1.sta00049 1.1.sta00050 1.1.sta00051 1.1.sta00052 1.1.sta00053 1.1.sta00054 1.1.sta00055 1.1.sta00056 1.1.sta00057 1.1.sta00058 1.1.sta00059 1.1.sta00060 1.1.sta00061 1.1.sta00062 1.1.sta00063 1.1.sta00500 1.1.sta00501 1.1.sta00502 1.1.sta00503 1.1.sta00504 1.1.sta00505 1.1.sta00506 1.1.sta00507 1.1.sta00508 1.1.sta00509 1.1.sta00510 1.1.sta00511 1.1.sta00512 1.1.sta00513 1.1.sta00514 1.1.sta00515 1.1.sta00516 1.1.sta00517 1.1.sta00518 1.1.sta00519 1.1.sta00520 1.1.sta00521 1.1.sta00522 1.1.sta00523 1.1.sta00524 1.1.sta00525 1.1.sta00526 1.1.sta00527 1.1.sta00528 1.1.sta00529 1.1.sta00530 1.1.sta00531 1.1.sta00532 1.1.sta00533 1.1.sta00534 1.1.sta00535 1.1.sta00536 1.1.sta00537 1.1.sta00538 1.1.sta00539 1.1.sta00540 1.1.sta00541 1.1.sta00542 1.1.sta00543 1.1.sta00544 1.1.sta00545 1.1.sta00546 1.1.sta00547 1.1.sta00548 1.1.sta00549 1.1.sta00550 1.1.sta00551 1.1.sta00552 1.1.sta00553 1.1.sta00554 1.1.sta00555 1.1.sta00556 1.1.sta00557 1.1.sta00558 1.1.sta00559 1.1.sta00560 1.1.sta00561 1.1.sta00562 1.1.sta00563 1.1.sta00564 1.1.sta00565 1.1.sta00566 1.1.sta00567 1.1.sta00568 1.1.sta00569 1.1.sta00570 1.1.sta00571 1.1.sta00572 1.1.sta00573 1.1.sta00574 1.1.sta00575 1.1.sta00576 1.1.sta00577 1.1.sta00578 1.1.sta00579 1.1.sta00580 1.1.sta00581 1.1.sta00582 1.1.sta00583 1.1.sta00584 1.1.sta00585 1.1.sta00586 1.1.sta00587 1.1.sta00588 1.1.sta00589 1.1.sta00590 1.1.sta00591 1.1.sta00592 1.1.sta00593 1.1.sta00594 1.1.sta00595 1.1.sta00596 1.1.sta00597 1.1.sta00598 1.1.sta00599 1.1.sta01000 1.1.sta01001 1.1.sta01002 1.1.sta01003 1.1.sta01004 1.1.sta01005 1.1.sta01006 1.1.sta01007 1.1.sta01008 1.1.sta01009 1.1.sta01010 1.1.sta01011 1.1.sta01012 1.1.sta01013 1.1.sta01014 1.1.sta01015 1.1.sta01016 1.1.sta01017 1.1.sta01018 1.1.sta01019 1.1.sta01020 1.1.sta01021 1.1.sta01022 1.1.sta01023 1.1.sta01024 1.1.sta01025 1.1.sta01026 1.1.sta01027 1.1.sta01028 1.1.sta01029 1.1.sta01030 1.1.sta01031 1.1.sta01032 1.1.sta01033 1.1.sta01034 1.1.sta01035 1.1.sta01036 1.1.sta01037 1.1.sta01038 1.1.sta01039 1.1.sta01040 1.1.sta01041 1.1.sta01042 1.1.sta01043 1.1.sta01044 1.1.sta01045 1.1.sta01046 1.1.sta01047 1.1.sta01048 1.1.sta01049 1.1.sta01050 1.1.sta01051 1.1.sta01052 1.1.sta01053 1.1.sta01054 1.1.sta01055 1.1.sta01056 1.1.sta01057 1.1.sta01058 1.1.sta01059 1.1.sta01060 1.1.sta01061 1.1.sta01062 1.1.sta01063
Firmware: N/A 10.1-ct-8x-__xtH-022-db8cfc6c 0.3-0
Machines: ben-ota-2




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 1   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min: 275.902 Mbps  Cx Ave: 275.902 Mbps  Cx Max: 275.902 Mbps  All Cx: 275.902 Mbps
                                                                                     Total: 275.902 Mbps
Aggregated Rate:   Min:    275.902 Mbps  Avg:    275.902 Mbps  Max:    275.902 Mbps
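
The per-station upload rate requested in each iteration is the 1 Gbps total divided evenly by the station count; a quick check (assuming integer truncation, which matches the figures shown in the blocks below):

    # Reproduce the per-station 'Requested Parameters' upload rates.
    TOTAL_BPS = 1_000_000_000
    for n in (1, 2, 5, 10, 20, 45, 60, 100, 140, 180, 220, 228):
        print(n, TOTAL_BPS // n)
    # e.g. 45 -> 22222222, 60 -> 16666666, 228 -> 4385964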

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.
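
The report presents fairness graphically. As an optional numeric summary (not something this report computes), Jain's fairness index condenses per-station throughputs into one value: 1.0 is perfectly fair, 1/n means one station got everything.

    # Jain's fairness index over per-station throughput samples.
    def jain_index(rates):
        n = len(rates)
        return sum(rates) ** 2 / (n * sum(r * r for r in rates))

    # e.g. the 2-station iteration below, 140.944 and 141.807 Mbps:
    print(jain_index([140.944e6, 141.807e6]))   # ~0.99999 (very fair)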

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 1000000000 (   1 Gbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 1   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:     2.054 GB  Cx Ave:     2.054 GB  Cx Max:     2.054 GB  All Cx:     2.054 GB
                                                                                     Total:      2.054 GB
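
The byte totals can be cross-checked against the observed rates and the 1-minute run time; since the reported rate is a running average, a small mismatch is expected:

    # Sanity check: bytes ~= rate * duration / 8.
    rate_bps = 275.902e6            # observed upload rate, 1-station iteration
    expected_gb = rate_bps * 60 / 8 / 1e9
    print(f"{expected_gb:.3f} GB")  # ~2.069 GB vs 2.054 GB reported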

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 2   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min: 140.944 Mbps  Cx Ave: 141.375 Mbps  Cx Max: 141.807 Mbps  All Cx: 282.751 Mbps
                                                                                     Total: 282.751 Mbps
Aggregated Rate:   Min:    140.944 Mbps  Avg:    141.375 Mbps  Max:    141.807 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 500000000 ( 500 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 2   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:   967.634 MB  Cx Ave:    968.43 MB  Cx Max:   969.226 MB  All Cx:     1.891 GB
                                                                                     Total:      1.891 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 5   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:  54.158 Mbps  Cx Ave:  55.122 Mbps  Cx Max:  55.709 Mbps  All Cx:  275.61 Mbps
                                                                                     Total:  275.61 Mbps
Aggregated Rate:   Min:     54.158 Mbps  Avg:     55.122 Mbps  Max:     55.709 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 200000000 ( 200 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 5   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:    381.03 MB  Cx Ave:   383.707 MB  Cx Max:   385.063 MB  All Cx:     1.874 GB
                                                                                     Total:      1.874 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 10   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:  23.411 Mbps  Cx Ave:  24.346 Mbps  Cx Max:  25.104 Mbps  All Cx: 243.462 Mbps
                                                                                     Total: 243.462 Mbps
Aggregated Rate:   Min:     23.411 Mbps  Avg:     24.346 Mbps  Max:     25.104 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station: 100000000 ( 100 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 10   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:   176.085 MB  Cx Ave:   177.479 MB  Cx Max:   178.903 MB  All Cx:     1.733 GB
                                                                                     Total:      1.733 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 20   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:  15.019 Mbps  Cx Ave:  15.489 Mbps  Cx Max:  16.089 Mbps  All Cx: 309.771 Mbps
                                                                                     Total: 309.771 Mbps
Aggregated Rate:   Min:     15.019 Mbps  Avg:     15.489 Mbps  Max:     16.089 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  50000000 (  50 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 20   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:   108.993 MB  Cx Ave:   110.474 MB  Cx Max:   111.965 MB  All Cx:     2.158 GB
                                                                                     Total:      2.158 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 45   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:   5.236 Mbps  Cx Ave:   7.326 Mbps  Cx Max:   7.874 Mbps  All Cx: 329.666 Mbps
                                                                                     Total: 329.666 Mbps
Aggregated Rate:   Min:      5.236 Mbps  Avg:      7.326 Mbps  Max:      7.874 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  22222222 (22.222 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 45   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:    38.171 MB  Cx Ave:    53.019 MB  Cx Max:     54.97 MB  All Cx:      2.33 GB
                                                                                     Total:       2.33 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 60   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:   4.192 Mbps  Cx Ave:   5.657 Mbps  Cx Max:   6.142 Mbps  All Cx: 339.413 Mbps
                                                                                     Total: 339.413 Mbps
Aggregated Rate:   Min:      4.192 Mbps  Avg:      5.657 Mbps  Max:      6.142 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  16666666 (16.667 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 60   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:    27.662 MB  Cx Ave:    38.659 MB  Cx Max:    39.731 MB  All Cx:     2.265 GB
                                                                                     Total:      2.265 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 100   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:   1.917 Mbps  Cx Ave:   4.004 Mbps  Cx Max:   5.384 Mbps  All Cx: 400.444 Mbps
                                                                                     Total: 400.444 Mbps
Aggregated Rate:   Min:      1.917 Mbps  Avg:      4.004 Mbps  Max:      5.384 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:  10000000 (  10 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 100   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:    16.851 MB  Cx Ave:    29.504 MB  Cx Max:    37.986 MB  All Cx:     2.881 GB
                                                                                     Total:      2.881 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 140   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min:   1.015 Mbps  Cx Ave:   2.895 Mbps  Cx Max:   5.619 Mbps  All Cx: 405.304 Mbps
                                                                                     Total: 405.304 Mbps
Aggregated Rate:   Min:      1.015 Mbps  Avg:      2.895 Mbps  Max:      5.619 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   7142857 (7.143 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 140   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:     8.023 MB  Cx Ave:    20.848 MB  Cx Max:    38.575 MB  All Cx:      2.85 GB
                                                                                     Total:       2.85 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 180   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min: 751.992 Kbps  Cx Ave:   2.501 Mbps  Cx Max:   5.358 Mbps  All Cx: 450.122 Mbps
                                                                                     Total: 450.122 Mbps
Aggregated Rate:   Min:    751.992 Kbps  Avg:      2.501 Mbps  Max:      5.358 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   5555555 (5.556 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 180   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:     5.725 MB  Cx Ave:    18.267 MB  Cx Max:    37.442 MB  All Cx:     3.211 GB
                                                                                     Total:      3.211 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 220   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min: 623.736 Kbps  Cx Ave:   2.486 Mbps  Cx Max:   4.229 Mbps  All Cx: 546.879 Mbps
                                                                                     Total: 546.879 Mbps
Aggregated Rate:   Min:    623.736 Kbps  Avg:      2.486 Mbps  Max:      4.229 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   4545454 (4.545 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 220   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:     5.848 MB  Cx Ave:    17.868 MB  Cx Max:    30.675 MB  All Cx:     3.839 GB
                                                                                     Total:      3.839 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 228   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Rate:
Download Rate:     Cx Min:        0 bps  Cx Ave:        0 bps  Cx Max:        0 bps  All Cx:        0 bps
Upload Rate:       Cx Min: 536.346 Kbps  Cx Ave:   2.393 Mbps  Cx Max:   3.925 Mbps  All Cx:  545.52 Mbps
                                                                                     Total:  545.52 Mbps
Aggregated Rate:   Min:    536.346 Kbps  Avg:      2.393 Mbps  Max:      3.925 Mbps

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined bps, 60 second running average

Text Data for Graph




Requested Parameters:
Download Rate: Per station:         0 (    0 bps)  All:            0 (    0 bps)
Upload Rate:   Per station:   4385964 (4.386 Mbps)  All:   1000000000 (   1 Gbps)
                                                 Total:    1000000000 (   1 Gbps)
Station count: 228   Connections per station: 1   Payload (PDU) sizes: AUTO (AUTO)

Observed Amount:
Download Amount:   Cx Min:          0 B  Cx Ave:          0 B  Cx Max:          0 B  All Cx:          0 B
Upload Amount:     Cx Min:     5.173 MB  Cx Ave:    17.603 MB  Cx Max:    29.501 MB  All Cx:     3.919 GB
                                                                                     Total:      3.919 GB

This graph shows fairness.  On a fair system, each station should get about the same throughput.
In the download direction, it is mostly the device-under-test that is responsible for this behavior,
but in the upload direction, LANforge itself would be the source of most fairness issues
unless the device-under-test takes specific actions to ensure fairness.

Combined Received bytes, for entire 1 m run

Text Data for Graph



Maximum Stations Connected: 228
Stations NOT connected at this time: 0
Maximum Stations with IP Address: 228
Stations without IP at this time: 0

Station Maximums


RF stats give an indication of how congested the RF environment is. Channel activity is what the WiFi radio reports as the busy-time of the RF environment. This is expected to be near 100% when LANforge is running at max speed; at lower speeds it should be a lower percentage, unless the RF environment is busy with other systems.
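
The 'channel utilisation' values in the BSS Load elements of the scan results later in this report express busy-time as a fraction of 255 (per the IEEE 802.11 BSS Load element); converting to percent:

    # Convert 802.11 BSS Load channel utilisation (x/255) to percent.
    for raw in (237, 218, 161):     # values from the three BSSes scanned below
        print(f"{raw}/255 = {100 * raw / 255:.1f}% busy")
    # 92.9%, 85.5%, 63.1%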

RF Stats for Stations

RX-Signal and Activity Data



Link rate stats give an indication of how well rate-control is working. The 'RX' link rate corresponds to the rate at which the device-under-test is transmitting. If all of the stations are on the same radio, the TX and RX encoding rates should be similar for all stations. If there is a definite pattern where some stations do not get a good RX rate, the device-under-test probably has rate-control problems. The TX rate is the rate at which LANforge transmits.
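
For context when reading the link-rate data: the AP in the scan results below advertises 2-stream VHT MCS 0-9 at 80 MHz, which caps the PHY rate near 866.7 Mbps with a short guard interval. The standard 802.11ac arithmetic (a reference calculation, not a measurement from this test):

    # Max 802.11ac PHY rate: 2 streams, VHT MCS 9 (256-QAM, 5/6), 80 MHz, SGI.
    data_subcarriers = 234          # 80 MHz VHT channel
    bits_per_sc = 8 * 5 / 6         # 256-QAM at coding rate 5/6
    streams = 2
    symbol_us = 3.6                 # short guard interval symbol time
    print(f"{data_subcarriers * bits_per_sc * streams / symbol_us:.1f} Mbps")
    # 866.7 Mbps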

Link Rate for Stations

TX/RX Link Rate Data


Key Performance Indicators CSV



Scan Results for SSIDs used in this test.
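
The per-BSS listings below match the output format of the Linux 'iw dev <iface> scan dump' command; a minimal sketch of collecting the same data from the station interfaces named in this report (assuming the iw tool is available on the test system):

    # Dump cached scan results from one station interface per radio.
    import subprocess

    for iface in ("sta00000", "sta00500", "sta01000"):
        out = subprocess.run(["iw", "dev", iface, "scan", "dump"],
                             capture_output=True, text=True).stdout
        print(out)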

BSS 30:23:03:81:9c:28(on sta00000) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 5180
	beacon interval: 100 TUs
	capability: ESS (0x0001)
	signal: -26.00 dBm
	last seen: 164 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-5lo
	Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 
	DS Parameter set: channel 36
	BSS Load:
		 * station count: 64
		 * channel utilisation: 237/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 128
	HT capabilities:
		Capabilities: 0x9ef
			RX LDPC
			HT20/HT40
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			No DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 36
		 * secondary channel offset: above
		 * STA channel width: any
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
		 * Max Number Of MSDUs In A-MSDU is unlimited
	VHT capabilities:
		VHT Capabilities (0x338819b2):
			Max MPDU length: 11454
			Supported Channel Width: neither 160 nor 80+80
			RX LDPC
			short GI (80 MHz)
			TX STBC
			SU Beamformer
			SU Beamformee
			MU Beamformer
			RX antenna pattern consistency
			TX antenna pattern consistency
		VHT RX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT TX highest supported: 0 Mbps
	VHT operation:
		 * channel width: 1 (80 MHz)
		 * center freq segment 1: 42
		 * center freq segment 2: 0
		 * VHT basic MCS set: 0xfffc
	Transmit Power Envelope:
		 * Local Maximum Transmit Power For 20 MHz: 23 dBm
		 * Local Maximum Transmit Power For 40 MHz: 23 dBm
		 * Local Maximum Transmit Power For 80 MHz: 23 dBm
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec


BSS 30:23:03:81:9c:27(on sta00500) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 2462
	beacon interval: 100 TUs
	capability: ESS ShortPreamble ShortSlotTime (0x0421)
	signal: -16.00 dBm
	last seen: 7 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-2
	Supported rates: 1.0* 2.0* 5.5* 11.0* 6.0 9.0 12.0 18.0 
	DS Parameter set: channel 11
	ERP: <no flags>
	Extended supported rates: 24.0 36.0 48.0 54.0 
	BSS Load:
		 * station count: 100
		 * channel utilisation: 218/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 81
	HT capabilities:
		Capabilities: 0x19ed
			RX LDPC
			HT20
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 11
		 * secondary channel offset: no secondary
		 * STA channel width: 20 MHz
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec


BSS 32:23:03:81:9c:29(on sta01000) -- associated
	TSF: 0 usec (0d, 00:00:00)
	freq: 5745
	beacon interval: 100 TUs
	capability: ESS (0x0001)
	signal: -19.00 dBm
	last seen: 175 ms ago
	Information elements from Probe Response frame:
	SSID: OpenWrt-5hi
	Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 
	DS Parameter set: channel 149
	BSS Load:
		 * station count: 64
		 * channel utilisation: 161/255
		 * available admission capacity: 0 [*32us]
	Supported operating classes:
		 * current operating class: 128
	HT capabilities:
		Capabilities: 0x9ef
			RX LDPC
			HT20/HT40
			SM Power Save disabled
			RX HT20 SGI
			RX HT40 SGI
			TX STBC
			RX STBC 1-stream
			Max AMSDU length: 7935 bytes
			No DSSS/CCK HT40
		Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
		Minimum RX AMPDU time spacing: 8 usec (0x06)
		HT TX/RX MCS rate indexes supported: 0-15
	HT operation:
		 * primary channel: 149
		 * secondary channel offset: above
		 * STA channel width: any
		 * RIFS: 0
		 * HT protection: no
		 * non-GF present: 1
		 * OBSS non-GF present: 0
		 * dual beacon: 0
		 * dual CTS protection: 0
		 * STBC beacon: 0
		 * L-SIG TXOP Prot: 0
		 * PCO active: 0
		 * PCO phase: 0
	Extended capabilities:
		 * Extended Channel Switching
		 * UTF-8 SSID
		 * Operating Mode Notification
		 * Max Number Of MSDUs In A-MSDU is unlimited
	VHT capabilities:
		VHT Capabilities (0x338819b2):
			Max MPDU length: 11454
			Supported Channel Width: neither 160 nor 80+80
			RX LDPC
			short GI (80 MHz)
			TX STBC
			SU Beamformer
			SU Beamformee
			MU Beamformer
			RX antenna pattern consistency
			TX antenna pattern consistency
		VHT RX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT RX highest supported: 0 Mbps
		VHT TX MCS set:
			1 streams: MCS 0-9
			2 streams: MCS 0-9
			3 streams: not supported
			4 streams: not supported
			5 streams: not supported
			6 streams: not supported
			7 streams: not supported
			8 streams: not supported
		VHT TX highest supported: 0 Mbps
	VHT operation:
		 * channel width: 1 (80 MHz)
		 * center freq segment 1: 155
		 * center freq segment 2: 0
		 * VHT basic MCS set: 0xfffc
	Transmit Power Envelope:
		 * Local Maximum Transmit Power For 20 MHz: 30 dBm
		 * Local Maximum Transmit Power For 40 MHz: 30 dBm
		 * Local Maximum Transmit Power For 80 MHz: 30 dBm
	WMM:	 * Parameter version 1
		 * u-APSD
		 * BE: CW 15-1023, AIFSN 3
		 * BK: CW 15-1023, AIFSN 7
		 * VI: CW 7-15, AIFSN 2, TXOP 3008 usec
		 * VO: CW 3-7, AIFSN 2, TXOP 1504 usec



Generated by Candela Technologies LANforge network testing tool.
www.candelatech.com