You can tell the LANforge server to save data to a directory locally on the management machine, and you can configure the workstation running the LANforge GUI to save data to a local desktop folder. First, open the Reporting Manager dialog: in the Reporting menu of the GUI client, select Report Manager.
Collecting data on your local workstation is very convenient if you can leave the GUI running for the duration of your test scenario. The format of the data here should be similar to the format of the data saved to the server directory. The folders for collecting data are relative to the folder you start your GUI from. If you type in lf_data, that probably means C:\Users\mumble\AppData\Local\LANforge-GUI\lf_data. You probably want to enter a fully qualified path that is more intuitive, like C:\Users\mumble\Documents\lf_data.
The Report Generator uses the local data files. In that dialog, the Report Input Directory field is the local folder where the CSV files collect. The Save Reports to Directory field is where the HTML and PDF reports should collect.
If your test scenario runs longer than your GUI can be up, you can configure the LANforge server to collect the data. The directory is relative to the /home/lanforge directory, so if you enter lf_data, you would find the CSV files in /home/lanforge/lf_data.
You can take a look at the data files easily. Here is a server data collection directory:
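For instance, if the server report directory were set to lf_data, a listing might look like this (a sketch; the endpoint names and timestamps in the file names will differ on your system):

$ ls /home/lanforge/lf_data
c201-A_1488414451.csv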
And using a utility like Notepad, vi, more, or less, you can look at the file contents:
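For example, on the server console (a sketch; the file name matches the endpoint CSV used in the examples below, and the second command simply prints the header row one column name per line):

$ less /home/lanforge/lf_data/c201-A_1488414451.csv
$ head -n1 /home/lanforge/lf_data/c201-A_1488414451.csv | tr ',' '\n'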
Importing the file into a spreadsheet like LibreOffice Calc is simple:
You only need to separate on comma (,).
To turn the Unix timestamps in the first column into dates, LibreOffice does not have a built-in formula, but the problem has been discussed here. The solution is a formula that looks like this:
=(A2/86400)+25569

and then you format the column as Date.
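Note that the TimeStamp values LANforge writes in these endpoint CSV files are in milliseconds, not seconds (see the date example below). A sketch, assuming column A holds the raw millisecond values, is to divide by 86,400,000 instead:

=(A2/86400000)+25569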
There are a number of ways to collect and sort the data with shell utilities. The first utility to consider is cut, then awk. The first column of the endpoint file we are going to read is the timestamp; the 14th is the rx bytes.
$ head -n2 c201-A_1488414451.csv | cut -d, -f1
TimeStamp
1488414454125
$ date -d @1488414454125
Mon Dec 23 19:28:45 PST 49135

(The wildly futuristic year is because these TimeStamp values are in milliseconds, while date expects seconds.)
$ head -n2 c201-A_1488414451.csv | (while IFS=, read -a L; do echo ${L[13]}; done)
rx_bytes
33847640064
$ head -n2 c201-A_1488414451.csv | cut -d, -f14
rx_bytes
33847640064
$ head -n2 c201-A_1488414451.csv | awk -F, '{print $14}'
rx_bytes
33847640064
$ head -n2 c201-A_1488414451.csv | awk -F, '{print $1 "\t" $14}'
TimeStamp	rx_bytes
1488414454125	33847640064
It is a lot easier to do math with a perl script than with a bash or awk script. You can pipe data into perl, or, when using the -ne switches, perl will read the remaining command-line arguments as input files.
$ head -n2 c201-A_1488414451.csv \
  | perl -ne '@v=split(/,/,$_); print "$v[0]\t$v[13]\n";'
TimeStamp	rx_bytes
1488414454125	33847640064

$ perl -ne 'BEGIN{$tt=0; @tstamps=(); @rxb=();}
  {@v=split(/,/,$_); push(@tstamps, $v[0]); push(@rxb, $v[13]);}
  END{$dt=$tstamps[$#tstamps] - $tstamps[1]; $db=$rxb[$#rxb] - $rxb[1];
  print "Time: $dt, Total:$db\n";}' \
  c201-A_1488414451.csv
Time: 18959363, Total:1213399040
Not everything you do in perl is going to be a one-liner. Here's an example of the same script as a more properly formatted perl file:
#!/usr/bin/perl
my $tt = 0;
my @tstamps = ();
my @rxb = ();
while(<>) {
   @v = split(/,/, $_);
   push(@tstamps, $v[0]);
   push(@rxb, $v[13]);
}
$dt = $tstamps[$#tstamps] - $tstamps[1];
$db = $rxb[$#rxb] - $rxb[1];
print "Time: $dt, Total:$db\n";
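If you save that script to a file (the name sum-endpoint.pl below is just an example), you run it against an endpoint CSV the same way as the one-liner, and with the same input file it should produce the same totals:

$ perl sum-endpoint.pl c201-A_1488414451.csv
Time: 18959363, Total:1213399040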