I hope you have checked my previous blog on JPerf.
Please read the JPerf part 1 blog above first, as this post is just an extension of it. In the previous blog we tested iPerf with a single data stream (class), where we assigned 70% of the CIR bandwidth to the "child1" class. With that configuration we saw a resultant bandwidth of around 2 Mbps, and the same was verified with iPerf/JPerf. This was because there was no traffic in the default class, so the results matched the QoS configuration on the router.
However, since iPerf/JPerf is not that well-known a tool, you will find that some engineers do not rely on its results and think it is not an ideal method to test the bandwidth between two points, like I used to 🙂 . Rather than argue the point, we will just focus on test cases and compare our expectations with the results we see in iPerf/JPerf.
Test case: The setup is exactly the same as in my old post; the only change is the number of data streams. In the previous setup we had just one data stream landing in the "child1" class, and it was given all the available bandwidth, i.e. 2 Mbps, since there was no other traffic in the default class to claim any of it. We change this by generating another stream for the default class as well. So now we have two data streams: one is the HTTP packet stream, which is classified into the "child1" class, and the other is not classified anywhere and hence lands in the default class. Since both classes now carry data, the "child1" class should only get its assigned bandwidth. Below is the expected bandwidth allocation in our situation.
For child1 class : 1.4 Mbps (70% of the 2 Mbps CIR)
For default class : 0.6 Mbps (the remaining 30%)
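The split above is simple arithmetic on the QoS configuration; here is a quick sketch, with the CIR and percentage taken from this setup:

```python
# Expected per-class bandwidth under congestion, given the router's
# parent shaper and the child class's bandwidth guarantee.
CIR_BPS = 2_000_000   # parent shape rate: 2 Mbps CIR
CHILD1_SHARE = 0.70   # "bandwidth 70%" configured on class child1

child1_bps = CIR_BPS * CHILD1_SHARE   # guaranteed to child1 when both classes send
default_bps = CIR_BPS - child1_bps    # remainder left for class-default

print(f"child1 : {child1_bps / 1e6:.1f} Mbps")   # 1.4 Mbps
print(f"default: {default_bps / 1e6:.1f} Mbps")  # 0.6 Mbps
```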
The above is the result we expect per the QoS rules; now let's see whether iPerf/JPerf gives similar output.
Following is the configuration on Server :
So we have two data streams here. Stream 1 is our old stream, which gets classified into the "child1" class and should get around 1.4 Mbps of bandwidth. Stream 2 ends up in the default class and should get around 0.6 Mbps.
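For reference, the equivalent iperf command-line server setup would look something like the following. This is a sketch, not the exact configuration used here: it assumes the router's "TEST" access list matches TCP port 80 (so that Stream 1's HTTP traffic lands in "child1"), and that Stream 2 uses iperf's default port 5001.

```shell
# Stream 1 server: listen on TCP port 80 so the traffic matches the
# "TEST" access list and is classified into class "child1"
# (the port is an assumption based on the HTTP classification)
iperf -s -p 80

# Stream 2 server: iperf's default port 5001, matched by no access
# list, so this traffic lands in class-default
iperf -s -p 5001
```

Each command runs in its own terminal, since each instance blocks while listening.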
Let's see the configuration on the client now:
Now that we have both streams set up, let's see how much bandwidth each stream gets.
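On the command line, the matching client side would look roughly like this. Again a sketch under the same assumptions: the server address is a placeholder, Stream 1 targets port 80 to hit the "child1" class, and Stream 2 targets the default port 5001.

```shell
# Stream 1: classified into "child1", should converge to ~1.4 Mbps
# (replace 192.0.2.10 with your iperf server's address)
iperf -c 192.0.2.10 -p 80 -t 60 -i 5

# Stream 2: lands in class-default, should get the remaining ~0.6 Mbps
iperf -c 192.0.2.10 -p 5001 -t 60 -i 5
```

Both clients must run simultaneously; if only one stream is active, the parent shaper lets it borrow the full 2 Mbps, as seen in part 1.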
This is exactly what we expected, which means iPerf can be relied upon in this kind of setup. We will keep testing to see if it fails in any situation. See the following output from the Cisco router:
C2921-2#show policy-map int gigabitEthernet 0/0
 GigabitEthernet0/0

  Service-policy output: Parent

    Class-map: class-default (match-any)
      376597 packets, 522134755 bytes
      30 second offered rate 1997000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 31/23/0
      (pkts output/bytes output) 376363/521838277
      shape (average) cir 2000000, bc 8000, be 8000
      target shape rate 2000000

      Service-policy : TEST

        Class-map: child1 (match-all)
          288257 packets, 409102610 bytes
          30 second offered rate 1397000 bps, drop rate 0000 bps
          Match: access-group name TEST
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 16/23/0
          (pkts output/bytes output) 288234/409072332
          bandwidth 70% (1400 kbps)

        Class-map: child2 (match-all)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: none

        Class-map: class-default (match-any)
          88127 packets, 112765484 bytes
          30 second offered rate 599000 bps, drop rate 0000 bps
          Match: any
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 13/0/0
          (pkts output/bytes output) 88127/112765484
Thanks for visiting my page. Please share your thoughts in the comments section.
Stay tuned for more.