I downloaded four files, and while they were running I kept a close eye on top and iftop to monitor the CPU usage and the bandwidth between client and server (that is, the connection between eatmonkey and the aria2 XML-RPC server running on the localhost interface).
The results were unexpected, and I was surprised by the CPU usage. It is currently very high, which gives me a new task for the next milestone: getting the CPU footprint down. The bandwidth holds no surprises, but since the milestone targets performance I will reduce the number of requests made to the server where possible (a sketch of what such a request looks like follows the table below). The problem is also noticeable in the GUI, which tends to micro-freeze while each download is updated; so the more downloads are active, the more the client freezes.
Some results, since numbers speak louder than words:
| Active downloads | Reception | Emission | CPU usage |
|---|---|---|---|
| 4 | 144 Kbps | 18 Kbps | 30% |
| 3 | 108 Kbps | 14 Kbps | 26% |
| 2 | 73 Kbps | 11 Kbps | 18% |
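For context, here is a minimal sketch of the kind of status poll involved. It uses Ruby's standard xmlrpc/client library against aria2's documented XML-RPC interface (aria2.tellActive on the default port 6800); the polling loop itself is illustrative and not eatmonkey's actual code.

```ruby
#!/usr/bin/ruby -w
# Illustrative sketch of polling aria2 over XML-RPC. Assumes aria2 was
# started with --enable-xml-rpc on the default port; not eatmonkey's code.
require "xmlrpc/client"

client = XMLRPC::Client.new2("http://localhost:6800/rpc")

# Asking for all active downloads in one request per refresh, rather
# than one request per download, keeps the emission bitrate constant.
active = client.call("aria2.tellActive",
                     ["gid", "completedLength", "totalLength", "downloadSpeed"])
active.each do |status|
  puts "#{status['gid']}: #{status['completedLength']}/#{status['totalLength']} " \
       "at #{status['downloadSpeed']} B/s"
end
```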
I will start by running benchmarks on the code itself, and thanks to Ruby there is built-in support for benchmarking and profiling. It ships with at least three useful modules: benchmark, profile and profiler. The first measures the time a piece of code takes to execute; it is useful for comparing different kinds of loops (for, while, do...while) or, for example, for checking whether a string is better compared with a simple comparison function or with a compiled regular expression. The second simply needs to be required at the top of a Ruby script, and it prints a summary of the time spent within each method/function call. The third does the same, except that the profiler can be run around distinct blocks of code. So much for the presentation; below are some samples.
File benchmark.rb:

```ruby
#!/usr/bin/ruby -w
require "benchmark"
require "pp"

integers = (1..10000).to_a
pp Benchmark.measure { integers.map { |i| i * i } }

Benchmark.bm(10) do |b|
  b.report("simple")  { 50000.times { 1 + 2 } }
  b.report("complex") { 50000.times { 1 + 2 - 6 + 5 * 4 / 2 + 4 } }
  b.report("stupid")  { 50000.times { "1".to_i + "3".to_i * "4".to_i - "2".to_i } }
end

words = IO.readlines("/usr/share/dict/words")
Benchmark.bm(10) do |b|
  b.report("include") { words.each { |w| next if w.include?("abe") } }
  b.report("regexp")  { words.each { |w| next if w =~ /abe/ } }
end
```
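As an aside, the loop comparison mentioned above is not covered by the sample, so here is an illustrative version (not from the original samples); Ruby's closest equivalent to do...while is a begin...end block with a trailing while.

```ruby
#!/usr/bin/ruby -w
# Illustrative companion to benchmark.rb: comparing loop constructs.
require "benchmark"

n = 100000
Benchmark.bm(12) do |b|
  b.report("for")         { for i in 1..n; i; end }
  b.report("while")       { i = 0; while i < n; i += 1; end }
  b.report("begin/while") { i = 0; begin; i += 1; end while i < n }
  b.report("times")       { n.times { |i| i } }
end
```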
File profile.rb:

```ruby
#!/usr/bin/ruby -w
require "profile"

def factorial(n)
  n > 1 ? n * factorial(n - 1) : 1
end

factorial(627)
```
File profiler.rb:

```ruby
#!/usr/bin/ruby -w
require "profiler"

def factorial(n)
  (2..n).to_a.inject(1) { |product, i| product * i }
end

Profiler__.start_profile
factorial(627)
Profiler__.stop_profile
Profiler__.print_profile($stdout)
```

Update: The profiling showed that during a status request 65% of the time is consumed by the XML parser. The REXML class is written 100% in Ruby, which is a good hint that the same request done with a parser written in C may bring a real boost. On the other hand, the requests are now only run once periodically and cached inside the pooler. This means the emission bitrate stays constant and the reception bitrate grows with the number of running downloads. As a side effect there is less XML parsing done, and thus less CPU time used.
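The REXML-versus-C-parser hunch could be checked with the benchmark module from above. This sketch assumes the libxml-ruby gem (a binding to the C libxml2 library) is installed, and the XML snippet is a hypothetical stand-in for an aria2 status response, not actual output:

```ruby
#!/usr/bin/ruby -w
# Sketch: compare the pure-Ruby REXML parser against libxml2 via the
# libxml-ruby gem. The XML payload below is a fake stand-in response.
require "benchmark"
require "rexml/document"
require "libxml"

xml = "<methodResponse><params><param><value><struct>" \
      "<member><name>downloadSpeed</name><value><string>1024</string></value></member>" \
      "</struct></value></param></params></methodResponse>"

Benchmark.bm(10) do |b|
  b.report("rexml")  { 1000.times { REXML::Document.new(xml) } }
  b.report("libxml") { 1000.times { LibXML::XML::Parser.string(xml).parse } }
end
```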
Might I ask what the license of eatmonkey is? I want to make a package for it on Source Mage GNU/Linux. Thanks!
GPLv2 or later. Good that you point this out, as I forgot to include the right COPYING file; the release tarball probably contains the GPLv3 version, which is wrong.
Is this project still alive?
Yes and no. I would like to push a frontend based on aria2 forward, although I don't have time for that.