PDF Report
Objective
The Candela WiFi Capacity test is designed to measure the performance of an
Access Point when handling different numbers of WiFi stations. The test
allows the user to increase the number of stations in user-defined steps
for each test iteration, measuring the per-station and overall
throughput for each trial. Along with throughput, other measurements
include client connection times, fairness, % packet loss, DHCP times and
more. The expected behavior is for the AP to handle several
stations (within the limitations of the AP specs) while giving all
stations a fair share of airtime, both upstream and
downstream. An AP that scales well will not show a significant overall
throughput decrease as more stations are added.
The Realtime Graph shows the summary download and upload RX Goodput rate of connections
created by this test. Goodput does not include Ethernet, IP, or UDP/TCP header overhead.
CSV Data for Realtime Throughput
Total Megabits-per-second transferred. This counts only the protocol payload, so it does not include the Ethernet, IP, UDP, TCP or other header overhead. A well-behaving system will show about the same rate as stations increase. If the rate decreases significantly as stations increase, then it is not scaling well.
CSV Data for Total Mbps Received vs Number of Stations Active
Text Data for Mbps Upload/Download
Protocol Data Units (PDUs) received. For TCP this does not mean much, but for UDP connections it correlates to packet size. If the PDU size is larger than what fits into a single frame, the network stack will segment it accordingly. A well-behaving system will show about the same rate as stations increase. If the rate decreases significantly as stations increase, then it is not scaling well.
CSV Data for Total PDU/s Received vs Number of Stations Active
Text Data for Pps Upload/Download
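As a rough illustration of why goodput is lower than the on-the-wire rate, the sketch below computes the goodput fraction of a UDP/IPv4 flow over Ethernet per payload size. The overhead constants (Ethernet header + FCS, preamble, inter-frame gap) are standard 802.3 values and our assumption, not figures taken from this report.

```python
# Illustrative sketch (not from this report): fraction of on-the-wire bytes
# that count as goodput for a UDP/IPv4 flow over Ethernet.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble/SFD + inter-frame gap (bytes)
IP_HDR, UDP_HDR = 20, 8

def goodput_fraction(payload_bytes: int) -> float:
    """Payload bytes divided by total bytes consumed on the wire."""
    wire = payload_bytes + UDP_HDR + IP_HDR + ETH_OVERHEAD
    return payload_bytes / wire

for size in (64, 512, 1472):
    print(size, round(goodput_fraction(size), 3))
```

Small PDUs therefore report much lower goodput than large ones at the same wire rate, which is why PDU size matters for the UDP results above.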
Station disconnect stats. These are reported only for the last iteration.
If the 'Clear Reset Counters' option is selected,
the stats are cleared after the initial association, so any re-connects reported
indicate a potential stability issue.
This can be used for long-term stability testing in cases where you bring up all
stations in one iteration and then run the test for a longer duration.
CSV Data for Port Reset Totals
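A minimal sketch of how the port reset totals could be screened for stability issues. The CSV column names here ("port", "resets") are hypothetical, not the report's actual schema.

```python
# Hypothetical sketch: flag stations whose reset counter is non-zero.
# Any re-connects after the initial association suggest a stability issue.
import csv, io

SAMPLE = "port,resets\nsta1400,0\nsta1401,2\n"  # made-up sample data

def ports_with_resets(text: str):
    """Return port names with one or more resets recorded."""
    return [row["port"] for row in csv.DictReader(io.StringIO(text))
            if int(row["resets"]) > 0]

print(ports_with_resets(SAMPLE))
```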
Station connect time is calculated from the initial Authenticate message through the completion of Open or RSN association/authentication.
CSV Data for Station Connect Times
Wifi-Capacity Test requested values
Station Increment: 1,2
Loop Iterations: Single (1)
Duration: 1 min (1 m)
Layer 4-7 Endpoint: NONE
MSS: AUTO
Total Download Rate: Zero (0 bps)
Total Upload Rate: OC48 (2.488 Gbps)
Protocol: UDP-IPv4
Payload Size: AUTO
Socket buffer size: OS Default
IP ToS: Best Effort (0)
Multi-Conn: AUTO
Set Bursty Minimum Speed: Burst Mode Disabled (-1)
Percentage TCP Rate: 10% (10%)
Randomize Rates: true
Leave Ports Up: false
Settle Time: 5 sec (5 s)
Rpt Timer: fast (1 s)
Show-Per-Iteration-Charts: true
Show-Per-Loop-Totals: true
Hunt-Lower-Rates: false
Show Events: true
Clear Reset Counters: false
CSV Reporting Dir: - not selected -
Build Date: Mon Oct 31 08:38:27 PDT 2022
Build Version: 5.4.6
Git Version: cc4e25f4013da941141702ada4944bd17adb5c5c
Ports: 1.1.eth1 1.1.sta1400 1.1.sta14009 1.1.sta1401 1.1.sta14010 1.1.sta14011 1.1.sta14012 1.1.sta14013 1.1.sta14014 1.1.sta14015 1.1.sta14016 1.1.sta14017 1.1.sta1402 1.1.sta1403 1.1.sta1404 1.1.sta1405 1.1.sta1406 1.1.sta1407 1.1.sta1408 1.1.wlan14
Firmware: 0. 6-1
Machines: ct523c-0b29
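The per-station upload rate in each iteration below is consistent with the requested total (OC48, 2.488 Gbps) divided evenly across the active stations. A minimal sketch of that arithmetic (the helper name is ours, not a LANforge API):

```python
# Per-station rate appears to be the requested total split evenly by station count.
TOTAL_BPS = 2_488_000_000  # OC48, the requested total upload rate in this report

def per_station_rate(total_bps: int, stations: int) -> int:
    """Even integer split of the total rate across stations."""
    return total_bps // stations

print(per_station_rate(TOTAL_BPS, 7))   # 355428571, matching 355.429 Mbps below
print(per_station_rate(TOTAL_BPS, 13))  # 191384615, matching 191.385 Mbps below
```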
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 2488000000 (2.488 Gbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 1
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 956.487 Mbps | Cx Ave: 956.487 Mbps | Cx Max: 956.487 Mbps | All Cx: 956.487 Mbps
Total: 956.487 Mbps
Aggregated Rate: Min: 956.487 Mbps | Avg: 956.487 Mbps | Max: 956.487 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 2488000000 (2.488 Gbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 1
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 6.585 GB | Cx Ave: 6.585 GB | Cx Max: 6.585 GB | All Cx: 6.585 GB
Total: 6.585 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
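"Each station should get about the same throughput" can be quantified with Jain's fairness index, a standard metric for rate allocations (1.0 means perfectly fair). The sketch below uses made-up rates, not figures from this report.

```python
# Jain's fairness index over per-station throughputs: 1.0 = perfectly fair,
# approaching 1/n as one station dominates. Sample rates are hypothetical.
def jain_index(rates):
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

fair = [100.0, 100.0, 100.0, 100.0]
skewed = [10.0, 50.0, 160.0, 180.0]
print(round(jain_index(fair), 3))    # 1.0
print(round(jain_index(skewed), 3))
```

Applying this to the per-connection rates in the CSV output gives a single number per iteration to track alongside total throughput.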
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 1244000000 (1.244 Gbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 2
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 367.081 Mbps | Cx Ave: 376.042 Mbps | Cx Max: 385.002 Mbps | All Cx: 752.083 Mbps
Total: 752.083 Mbps
Aggregated Rate: Min: 367.081 Mbps | Avg: 376.042 Mbps | Max: 385.002 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 1244000000 (1.244 Gbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 2
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 3.175 GB | Cx Ave: 3.19 GB | Cx Max: 3.205 GB | All Cx: 6.38 GB
Total: 6.38 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 829333333 (829.333 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 3
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 300.213 Mbps | Cx Ave: 313.14 Mbps | Cx Max: 325.252 Mbps | All Cx: 939.419 Mbps
Total: 939.419 Mbps
Aggregated Rate: Min: 300.213 Mbps | Avg: 313.14 Mbps | Max: 325.252 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 829333333 (829.333 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 3
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 2.091 GB | Cx Ave: 2.109 GB | Cx Max: 2.119 GB | All Cx: 6.328 GB
Total: 6.328 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 622000000 (622 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 4
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 233.645 Mbps | Cx Ave: 239.32 Mbps | Cx Max: 241.214 Mbps | All Cx: 957.282 Mbps
Total: 957.282 Mbps
Aggregated Rate: Min: 233.645 Mbps | Avg: 239.32 Mbps | Max: 241.214 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 622000000 (622 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 4
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 1.596 GB | Cx Ave: 1.622 GB | Cx Max: 1.641 GB | All Cx: 6.487 GB
Total: 6.487 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 497600000 (497.6 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 5
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 157.674 Mbps | Cx Ave: 174.154 Mbps | Cx Max: 181.942 Mbps | All Cx: 870.771 Mbps
Total: 870.771 Mbps
Aggregated Rate: Min: 157.674 Mbps | Avg: 174.154 Mbps | Max: 181.942 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 497600000 (497.6 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 5
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 1.229 GB | Cx Ave: 1.259 GB | Cx Max: 1.287 GB | All Cx: 6.296 GB
Total: 6.296 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 414666666 (414.667 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 6
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 148.996 Mbps | Cx Ave: 156.466 Mbps | Cx Max: 163.621 Mbps | All Cx: 938.795 Mbps
Total: 938.795 Mbps
Aggregated Rate: Min: 148.996 Mbps | Avg: 156.466 Mbps | Max: 163.621 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 414666666 (414.667 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 6
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 1.039 GB | Cx Ave: 1.052 GB | Cx Max: 1.069 GB | All Cx: 6.313 GB
Total: 6.313 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 355428571 (355.429 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 7
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 80.143 Mbps | Cx Ave: 109.873 Mbps | Cx Max: 126.948 Mbps | All Cx: 769.108 Mbps
Total: 769.108 Mbps
Aggregated Rate: Min: 80.143 Mbps | Avg: 109.873 Mbps | Max: 126.948 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 355428571 (355.429 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 7
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 899.159 MB | Cx Ave: 927.3 MB | Cx Max: 950.23 MB | All Cx: 6.339 GB
Total: 6.339 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 311000000 (311 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 8
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 112.157 Mbps | Cx Ave: 119.671 Mbps | Cx Max: 121.305 Mbps | All Cx: 957.365 Mbps
Total: 957.365 Mbps
Aggregated Rate: Min: 112.157 Mbps | Avg: 119.671 Mbps | Max: 121.305 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 311000000 (311 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 8
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 813.659 MB | Cx Ave: 832.66 MB | Cx Max: 844.206 MB | All Cx: 6.505 GB
Total: 6.505 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 276444444 (276.444 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 9
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 96.956 Mbps | Cx Ave: 106.352 Mbps | Cx Max: 115.024 Mbps | All Cx: 957.164 Mbps
Total: 957.164 Mbps
Aggregated Rate: Min: 96.956 Mbps | Avg: 106.352 Mbps | Max: 115.024 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 276444444 (276.444 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 9
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 620.638 MB | Cx Ave: 663.898 MB | Cx Max: 704.447 MB | All Cx: 5.835 GB
Total: 5.835 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 248800000 (248.8 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 10
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 62.885 Mbps | Cx Ave: 88.508 Mbps | Cx Max: 101.252 Mbps | All Cx: 885.077 Mbps
Total: 885.077 Mbps
Aggregated Rate: Min: 62.885 Mbps | Avg: 88.508 Mbps | Max: 101.252 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 248800000 (248.8 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 10
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 575.493 MB | Cx Ave: 598.965 MB | Cx Max: 630.17 MB | All Cx: 5.849 GB
Total: 5.849 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 226181818 (226.182 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 11
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 42.034 Mbps | Cx Ave: 56.604 Mbps | Cx Max: 76.114 Mbps | All Cx: 622.646 Mbps
Total: 622.646 Mbps
Aggregated Rate: Min: 42.034 Mbps | Avg: 56.604 Mbps | Max: 76.114 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 226181818 (226.182 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 11
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 570.171 MB | Cx Ave: 590.336 MB | Cx Max: 615.976 MB | All Cx: 6.342 GB
Total: 6.342 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 207333333 (207.333 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 12
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 46.815 Mbps | Cx Ave: 77.164 Mbps | Cx Max: 83.572 Mbps | All Cx: 925.967 Mbps
Total: 925.967 Mbps
Aggregated Rate: Min: 46.815 Mbps | Avg: 77.164 Mbps | Max: 83.572 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 207333333 (207.333 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 12
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 529.195 MB | Cx Ave: 557.374 MB | Cx Max: 578.352 MB | All Cx: 6.532 GB
Total: 6.532 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 191384615 (191.385 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 13
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 60.29 Mbps | Cx Ave: 71.702 Mbps | Cx Max: 76.574 Mbps | All Cx: 932.124 Mbps
Total: 932.124 Mbps
Aggregated Rate: Min: 60.29 Mbps | Avg: 71.702 Mbps | Max: 76.574 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 191384615 (191.385 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 13
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 466.981 MB | Cx Ave: 509.392 MB | Cx Max: 533.55 MB | All Cx: 6.467 GB
Total: 6.467 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 177714285 (177.714 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 14
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 65.01 Mbps | Cx Ave: 68.359 Mbps | Cx Max: 70.814 Mbps | All Cx: 957.03 Mbps
Total: 957.03 Mbps
Aggregated Rate: Min: 65.01 Mbps | Avg: 68.359 Mbps | Max: 70.814 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 177714285 (177.714 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 14
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 447.454 MB | Cx Ave: 479.241 MB | Cx Max: 495.526 MB | All Cx: 6.552 GB
Total: 6.552 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 165866666 (165.867 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 15
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 49.932 Mbps | Cx Ave: 63.8 Mbps | Cx Max: 68.99 Mbps | All Cx: 957.003 Mbps
Total: 957.003 Mbps
Aggregated Rate: Min: 49.932 Mbps | Avg: 63.8 Mbps | Max: 68.99 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 165866666 (165.867 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 15
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 448.004 MB | Cx Ave: 460.18 MB | Cx Max: 470.157 MB | All Cx: 6.741 GB
Total: 6.741 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 155500000 (155.5 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 16
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 32.778 Mbps | Cx Ave: 59.83 Mbps | Cx Max: 64.023 Mbps | All Cx: 957.279 Mbps
Total: 957.279 Mbps
Aggregated Rate: Min: 32.778 Mbps | Avg: 59.83 Mbps | Max: 64.023 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 155500000 (155.5 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 16
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 412.755 MB | Cx Ave: 431.372 MB | Cx Max: 441.183 MB | All Cx: 6.74 GB
Total: 6.74 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 146352941 (146.353 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 17
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 31.287 Mbps | Cx Ave: 56.311 Mbps | Cx Max: 67.16 Mbps | All Cx: 957.293 Mbps
Total: 957.293 Mbps
Aggregated Rate: Min: 31.287 Mbps | Avg: 56.311 Mbps | Max: 67.16 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 146352941 (146.353 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 17
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 257.029 MB | Cx Ave: 406.046 MB | Cx Max: 444.457 MB | All Cx: 6.741 GB
Total: 6.741 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 138222222 (138.222 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 18
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 43.896 Mbps | Cx Ave: 53.177 Mbps | Cx Max: 58.428 Mbps | All Cx: 957.187 Mbps
Total: 957.187 Mbps
Aggregated Rate: Min: 43.896 Mbps | Avg: 53.177 Mbps | Max: 58.428 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 138222222 (138.222 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 18
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 358.215 MB | Cx Ave: 383.611 MB | Cx Max: 394.501 MB | All Cx: 6.743 GB
Total: 6.743 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 130947368 (130.947 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 19
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Rate:
Download Rate: Cx Min: 0 bps | Cx Ave: 0 bps | Cx Max: 0 bps | All Cx: 0 bps
Upload Rate: Cx Min: 28.858 Mbps | Cx Ave: 50.371 Mbps | Cx Max: 59.236 Mbps | All Cx: 957.051 Mbps
Total: 957.051 Mbps
Aggregated Rate: Min: 28.858 Mbps | Avg: 50.371 Mbps | Max: 59.236 Mbps
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Mbps, 60 second running average
Text Data for Graph
Requested Parameters:
Download Rate: Per station: 0 (0 bps) | All: 0 (0 bps)
Upload Rate: Per station: 130947368 (130.947 Mbps) | All: 2488000000 (2.488 Gbps)
Total: 2488000000 (2.488 Gbps)
Station count: 19
Connections per station: 1
Payload (PDU) sizes: AUTO (AUTO)
Observed Amount:
Download Amount: Cx Min: 0 B | Cx Ave: 0 B | Cx Max: 0 B | All Cx: 0 B
Upload Amount: Cx Min: 336.389 MB | Cx Ave: 361.199 MB | Cx Max: 382.489 MB | All Cx: 6.702 GB
Total: 6.702 GB
This graph shows fairness. On a fair system, each station should get about the same throughput. In the download direction, it is mostly the device-under-test that is responsible for this behavior, but in the upload direction, LANforge itself would be the source of most fairness issues unless the device-under-test takes specific actions to ensure fairness.
CSV Data for Combined Received Megabytes, for entire 1 m run
Text Data for Graph
Maximum Stations Connected: 19
Stations NOT connected at this time: 0
Maximum Stations with IP Address: 19
Stations without IP at this time: 0
CSV Data for Station Maximums
RF stats give an indication of how congested the RF environment is. Channel activity is what the WiFi radio reports as the busy-time of the RF environment. This is expected to be near 100% when LANforge is running at maximum speed; at lower speeds it should be a lower percentage, unless the RF environment is busy with other systems.
CSV Data for RF Stats for Stations
RX-Signal and Activity Data
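Channel busy-time of the kind reported here can also be read directly from the kernel's survey statistics. The sketch below parses the text printed by `iw dev <dev> survey dump` (field names follow iw's output format; this helper is an assumption for illustration, not part of the LANforge report):

```python
import re

def busy_percent(survey_text):
    """Return the busy-time percentage for the in-use channel, parsed from
    `iw dev <dev> survey dump` output. Sketch only; assumes iw's text format."""
    in_use = False
    active = busy = None
    for line in survey_text.splitlines():
        if "frequency:" in line:
            # Only the in-use frequency block is relevant here.
            in_use = "[in use]" in line
        elif in_use:
            m = re.search(r"channel (active|busy) time:\s*(\d+) ms", line)
            if m:
                if m.group(1) == "active":
                    active = int(m.group(2))
                else:
                    busy = int(m.group(2))
    if active and busy is not None:
        return 100.0 * busy / active
    return None

example = ("Survey data from wlan14\n"
           "\tfrequency:\t5955 MHz [in use]\n"
           "\tchannel active time:\t1000 ms\n"
           "\tchannel busy time:\t870 ms\n")
print(busy_percent(example))  # -> 87.0
```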
Link rate stats give an indication of how well rate-control is working. The 'RX' link rate corresponds to what the device-under-test is transmitting. If all of the stations are on the same radio, then the TX and RX encoding rates should be similar for all stations. If there is a definite pattern where some stations do not get a good RX rate, then the device-under-test likely has rate-control problems. The TX rate is what LANforge is transmitting at.
CSV Data for Link Rate for Stations
TX/RX Link Rate Data
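The pattern described above can be checked mechanically by comparing each station's RX link rate against the group median; a minimal sketch (station names and rates below are hypothetical, not from this report):

```python
from statistics import median

def flag_low_rx(rx_mbps, threshold=0.5):
    """Return stations whose RX link rate falls below threshold x median,
    a possible sign of DUT rate-control problems. Sketch only."""
    med = median(rx_mbps.values())
    return sorted(sta for sta, r in rx_mbps.items() if r < threshold * med)

rates = {"sta0000": 1201.0, "sta0001": 1134.0, "sta0002": 486.0, "sta0003": 1201.0}
print(flag_low_rx(rates))  # -> ['sta0002']
```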
Key Performance Indicators CSV
Scan Results for SSIDs used in this test.
BSS 00:0a:52:38:84:1b(on wlan14)
last seen: 4759.104s [boottime]
TSF: 4772662102 usec (0d, 01:19:32)
freq: 5955
beacon interval: 240 TUs
capability: ESS Privacy (0x0011)
signal: -75.00 dBm
last seen: 11 ms ago
Information elements from Probe Response frame:
SSID: ben-7916-6g
Supported rates: 6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0
DS Parameter set: channel 1
RSN: * Version: 1
* Group cipher: CCMP
* Pairwise ciphers: CCMP
* Authentication suites: SAE
* Capabilities: 16-PTKSA-RC 1-GTKSA-RC MFP-required MFP-capable (0x00cc)
Supported operating classes:
* current operating class: 134
Extended capabilities:
* Extended Channel Switching
* Multiple BSSID
* SSID List
* Operating Mode Notification
* 6
Transmit Power Envelope:
Transmit Power Envelope:
HE capabilities:
HE MAC Capabilities (0x00051a081044):
+HTC HE Supported
TWT Responder
BSR
OM Control
Maximum A-MPDU Length Exponent: 3
BQR
A-MSDU in A-MPDU
OM Control UL MU Data Disable RX
HE PHY Capabilities: (0x0c20ce126f09afc8000c00):
HE40/HE80/5GHz
HE160/5GHz
LDPC Coding in Payload
NDP with 4x HE-LTF and 3.2us GI
STBC Tx <= 80MHz
STBC Rx <= 80MHz
Full Bandwidth UL MU-MIMO
Partial Bandwidth UL MU-MIMO
DCM Max Constellation: 2
DCM Max Constellation Rx: 2
SU Beamformee
MU Beamformer
Beamformee STS <= 80Mhz: 3
Beamformee STS > 80Mhz: 3
Sounding Dimensions <= 80Mhz: 1
Sounding Dimensions > 80Mhz: 1
Codebook Size SU Feedback
Codebook Size MU Feedback
Triggered SU Beamforming Feedback
Triggered MU Beamforming Feedback
Partial Bandwidth Extended Range
PPE Threshold Present
Max NC: 1
STBC Tx > 80MHz
STBC Rx > 80MHz
TX 1024-QAM
RX 1024-QAM
HE RX MCS and NSS set <= 80 MHz
1 streams: MCS 0-11
2 streams: MCS 0-11
3 streams: not supported
4 streams: not supported
5 streams: not supported
6 streams: not supported
7 streams: not supported
8 streams: not supported
HE TX MCS and NSS set <= 80 MHz
1 streams: MCS 0-11
2 streams: MCS 0-11
3 streams: not supported
4 streams: not supported
5 streams: not supported
6 streams: not supported
7 streams: not supported
8 streams: not supported
HE RX MCS and NSS set 160 MHz
1 streams: MCS 0-11
2 streams: MCS 0-11
3 streams: not supported
4 streams: not supported
5 streams: not supported
6 streams: not supported
7 streams: not supported
8 streams: not supported
HE TX MCS and NSS set 160 MHz
1 streams: MCS 0-11
2 streams: MCS 0-11
3 streams: not supported
4 streams: not supported
5 streams: not supported
6 streams: not supported
7 streams: not supported
8 streams: not supported
PPE Threshold 0x39 0x1c 0xc7 0x71 0x1c 0x07
WMM: * Parameter version 1
* BE: CW 15-1023, AIFSN 3
* BK: CW 15-1023, AIFSN 7
* VI: CW 7-15, AIFSN 2, TXOP 3008 usec
* VO: CW 3-7, AIFSN 2, TXOP 1504 usec
META Information for Report for: Wifi Capacity Test