2010-11-28

SPAM-ips.rb

I'm sharing a small script that scans IPs against the Whois and GeoIP databases. It quickly retrieves the geolocation of each IP and prints statistics, so you know where the connections originate from. The Whois information is stored in text files named whois.xxx.yyy.zzz.bbb.

You can download the script here.

Example:
 • Usage
$ spam-ips.rb --help
Usage: /home/mike/.local/bin/spam-ips.rb ip|filename [[ip|filename] ...]

 • First we retrieve some IPs
$ awk '{print $6}' /var/log/httpd/access.log > /tmp/ip-list.txt

 • Now we run the script with the list of IPs inside the text file
$ cd /tmp
$ spam-ips.rb ip-list.txt
Scanning 18 IPs... done.
xxx.zzz.yyy.bbb GeoIP Country Edition: IP Address not found
xxx.zzz.yyy.bbb GeoIP Country Edition: BR, Brazil
xxx.zzz.yyy.bbb GeoIP Country Edition: AR, Argentina
xxx.zzz.yyy.bbb GeoIP Country Edition: SE, Sweden
xxx.zzz.yyy.bbb GeoIP Country Edition: CA, Canada
xxx.zzz.yyy.bbb GeoIP Country Edition: US, United States
xxx.zzz.yyy.bbb GeoIP Country Edition: DE, Germany
xxx.zzz.yyy.bbb GeoIP Country Edition: BE, Belgium
xxx.zzz.yyy.bbb GeoIP Country Edition: FR, France
xxx.zzz.yyy.bbb GeoIP Country Edition: NL, Netherlands
xxx.zzz.yyy.bbb GeoIP Country Edition: NO, Norway
xxx.zzz.yyy.bbb GeoIP Country Edition: FI, Finland
xxx.zzz.yyy.bbb GeoIP Country Edition: DE, Germany
xxx.zzz.yyy.bbb GeoIP Country Edition: FR, France
xxx.zzz.yyy.bbb GeoIP Country Edition: FR, France
xxx.zzz.yyy.bbb GeoIP Country Edition: DE, Germany
xxx.zzz.yyy.bbb GeoIP Country Edition: RU, Russian Federation
xxx.zzz.yyy.bbb GeoIP Country Edition: RU, Russian Federation
3       FR, France
3       DE, Germany
2       RU, Russian Federation
1       US, United States
1       NL, Netherlands
1       IP Address not found
1       NO, Norway
1       FI, Finland
1       SE, Sweden
1       CA, Canada
1       BR, Brazil
1       BE, Belgium
1       AR, Argentina
Total: 18

I wrote this script when I noticed Wiki SPAM and concluded that the SPAM originated from a single botmaster, though of course I was unable to figure out which one. The script can still be useful from time to time.
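
If you only want the gist of it, here is a minimal sketch of the same idea, assuming the geoiplookup command line tool from the GeoIP package is installed (the real script does more, e.g. the Whois queries):
#!/usr/bin/ruby
# Geolocate a list of IPs and print per-country statistics
ips = ARGF.readlines.map { |line| line.strip }.reject { |ip| ip.empty? }

countries = Hash.new(0)
ips.each do |ip|
    # geoiplookup prints e.g. "GeoIP Country Edition: FR, France"
    country = `geoiplookup #{ip}`[/GeoIP Country Edition: (.*)/, 1]
    puts "#{ip} GeoIP Country Edition: #{country}"
    countries[country] += 1
end

countries.sort_by { |country, count| -count }.each do |country, count|
    puts "#{count}\t#{country}"
end
puts "Total: #{ips.size}"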

2010-10-23

XTerm as root-tail

The idea behind this title is to use XTerm as a log viewer over the desktop, just like root-tail does. The root-tail tool paints text on the root window by default, or on any other X window when used with the -id parameter.

Using XTerm brings a small advantage: it is possible to scroll through the “backlog” and make text selections. On the downside, it won't let you click through to the desktop, so it is mostly useful for people without desktop icons, for example.

We will start with a first simple example: a Shell script that combines DevilsPie and XTerm. Thanks to DevilsPie, the terminals will all be kept in the background below other windows and will never take the focus. DevilsPie is a tool that watches the creation of new windows and applies special rules to them.

Obviously, you need to install the command line tool devilspie. It's a command to run in the background as a daemon. Configuration files with a .ds extension, containing window matches and rules, are put in the ~/.devilspie directory.

First example

The first example shows how to match only one specific XTerm window.

The DevilsPie configuration:
DesktopLog.ds
(if
 (is (window_class) "DesktopLog")
 (begin
  (wintype "dock")
  (geometry "+20+45")
  (below)
  (undecorate)
  (skip_pager)
  (opacity 80)
 )
)
The Shell script making sure devilspie is running, and spawning a single xterm process:
desktop-log.sh
#!/bin/sh
test `pidof devilspie` || devilspie &
xterm -geometry 164x73 -uc -class DesktopLog -T daemon.log -e sudo tail -f /var/log/daemon.log &
NB: You may notice that the size of the XTerm window is set in the Shell script while the position is set in the DevilsPie rules file, and there is a simple reason for this. By default XTerm has a size of 80 columns and 24 lines, and text with overly long lines is wrapped onto the next line. If you resize the window afterwards, the wrapped text won't move up and the result will be ugly. Therefore it's better to set the initial size of the terminal correctly.

To try the example, save the DevilsPie snippet inside the directory ~/.devilspie, then download and execute the Shell script. Make sure to quit any running DevilsPie process whenever you modify or install a new .ds file.
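
For instance, a minimal sketch of that workflow, using the file names from this example:
$ mkdir -p ~/.devilspie
$ cp DesktopLog.ds ~/.devilspie/
$ killall devilspie 2>/dev/null
$ sh desktop-log.sh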


Second example

The second example is a little more complete: it starts three terminals, one of which is coloured in black.
DesktopLog.ds
(if
 (matches (window_class) "DesktopLog[0-9]+")
 (begin
  (wintype "dock")
  (below)
  (undecorate)
  (skip_pager)
  (opacity 80)
 )
)
 
(if
 (is (window_class) "DesktopLog1")
 (geometry "+480+20")
)
 
(if
 (is (window_class) "DesktopLog2")
 (geometry "+20+20")
)
 
(if
 (is (window_class) "DesktopLog3")
 (geometry "+20+330")
)
desktop-log.sh
#!/bin/sh
test `pidof devilspie` || devilspie &
xterm -geometry 88x40 -uc -class DesktopLog1 -T daemon.log -e sudo -s tail -f /var/log/daemon.log &
xterm -geometry 70x20 -uc -class DesktopLog2 -T auth.log -e sudo -s tail -f /var/log/auth.log &
xterm -fg grey -bg black -geometry 70x16 -uc -class DesktopLog3 -T pacman.log -e sudo -s tail -f /var/log/pacman.log &


NB: You will probably notice that setting the geometry is awkward, especially since position and size live in two different files; getting it right takes several tweaks.

This blog post was cross-posted to the Xfce Wiki.

2010-09-19

CLI tool to review PO files

The annoying thing about reviewing PO files is that it is practically impossible: when there are two hundred messages in a PO file, how are you going to know which messages changed? That's the way it currently works with Transifex, but there is very good news: first, a review board is already available, which is a good step forward, and second, it is going to get a good kick to make it awesome. Until that happens, I have written two scripts to do such a review.

A shell script msgdiff.sh

Pros: tools available on every system
Cons: ugly output, needs template file

#!/bin/sh
PO_ORIG=$1
PO_REVIEW=$2
PO_TEMPL=$3

MSGMERGE=msgmerge
DIFF=diff
PAGER=more
RM=/bin/rm
MKTEMP=mktemp

# Usage
if test "$1" = "" -o "$2" = "" -o "$3" = ""; then
    echo Usage: $0 orig.po review.po template.pot
    exit 1
fi

# Merge
TMP_ORIG=`$MKTEMP po-orig.XXX`
TMP_REVIEW=`$MKTEMP po-review.XXX`
$MSGMERGE $PO_ORIG $PO_TEMPL > $TMP_ORIG 2> /dev/null
$MSGMERGE $PO_REVIEW $PO_TEMPL > $TMP_REVIEW 2> /dev/null

# Diff
$DIFF -u $TMP_ORIG $TMP_REVIEW | $PAGER

# Clean up files
$RM $TMP_ORIG $TMP_REVIEW

Example:
$ ./msgdiff.sh fr.po fr.review.po thunar.pot
[...]
 #: ../thunar-vcs-plugin/tvp-git-action.c:265
-#, fuzzy
 msgid "Menu|Bisect"
-msgstr "Différences détaillées"
+msgstr "Menu|Couper en deux"
 
 #: ../thunar-vcs-plugin/tvp-git-action.c:265
 msgid "Bisect"
-msgstr ""
+msgstr "Couper en deux"
[...]

A Python script podiff.py

Pros: programmable output
Cons: external dependency

The script depends on polib, which can be installed with the setuptools scripts. Make sure setuptools is installed, and then run the command sudo easy_install polib.

#!/usr/bin/env python
import polib

def podiff(path_po_orig, path_po_review):
    po_orig = polib.pofile(path_po_orig)
    po_review = polib.pofile(path_po_review)
    po_diff = polib.POFile()
    po_diff.header = "PO Diff Header"
    for entry in po_review:
        orig_entry = po_orig.find(entry.msgid)
        # Guard against messages that are obsolete or missing from the original
        if orig_entry is None or entry.obsolete:
            continue
        if orig_entry.msgstr != entry.msgstr \
          or ("fuzzy" in orig_entry.flags) != ("fuzzy" in entry.flags):
            po_diff.append(entry)
    return po_diff


if __name__ == "__main__":
    import sys
    import os.path

    # Usage
    if len(sys.argv) != 3 \
      or not os.path.isfile(sys.argv[1]) \
      or not os.path.isfile(sys.argv[2]):
        print "Usage: %s orig.po review.po" % sys.argv[0]
        sys.exit(1)

    # Retrieve diff
    path_po_orig = sys.argv[1]
    path_po_review = sys.argv[2]
    po_diff = podiff(path_po_orig, path_po_review)

    # Print out orig v. review messages
    po = polib.pofile(path_po_orig)
    for entry in po_diff:
        orig_entry = po.find(entry.msgid)
        orig_fuzzy = review_fuzzy = "fuzzy"
        if "fuzzy" not in orig_entry.flags:
            orig_fuzzy = "not fuzzy"
        if "fuzzy" not in entry.flags:
            review_fuzzy = "not fuzzy"
        print "'%s' was %s is %s\n\tOriginal => '%s'\n\tReviewed => '%s'\n" % (entry.msgid, orig_fuzzy, review_fuzzy, orig_entry.msgstr, entry.msgstr)

Example:
$ ./podiff.py fr.po fr.review.po
'Menu|Bisect' was fuzzy is not fuzzy
 Original => 'Différences détaillées'
 Reviewed => 'Menu|Couper en deux'

'Bisect' was not fuzzy is not fuzzy
 Original => ''
 Reviewed => 'Couper en deux'
[...]

2010-09-06

Benchmarking Compression Tools

Comparison of several compression tools: lzop, gzip, bzip2, 7zip, and xz.
  • Lzop: small and very fast yet good compression.
  • Gzip: fast and good compression.
  • Bzip2: slow for both compression and decompression although very good compression.
  • 7-Zip: LZMA algorithm, slower than Bzip2 for compression but very good compression.
  • Xz: LZMA2, evolution of LZMA algorithm.

Preparation

  • Be skeptical about compression tools and feel like promoting your favourite one
  • Compare old and new compression tools quickly and find interesting results
So much for the spirit. What you really need is to write a script (Bash, Ruby, Perl, anything will do), because you will want to generate the benchmark data automatically. I picked Ruby as it is nowadays my language of choice for any kind of Shell-like script. By choosing Ruby I get a large panel of classes to process benchmarking data: a Benchmark class (wonderful), a CSV class (awfully documented, redundant), and a zillion Gems for any kind of task I might need (although I always avoid them).

I first focused on retrieving the data I was interested in (memory, CPU time and file size) and saving it in CSV format. That way I can easily produce charts with existing applications. I was thinking that maybe GoogleCL could generate charts from the command line with Google Docs, but it isn't supported (maybe it will be, maybe it won't; it's up to gdata-python-client). However, there is an actual Google tool to generate charts: the Google Chart API, which works by providing a URI and getting an image back. The Google Image Chart Editor website helps you generate the chart you want in a friendly WYSIWYG mode; after that it is just a matter of massaging the data into shape for the URI. But then, while focusing on the charts, I found the Ruby Gem googlecharts, which makes it easy to pass the data and save the image.

Ruby Script

The Ruby script needs the following:
  • It was written with Ruby 1.9
  • Linux/Procfs for reading the status of processes
  • Googlecharts: gem install googlecharts
  • ImageMagick for the command line tool convert (optional)
The Ruby script takes a path as argument, from which it creates a tarball inside a tmpfs directory in order to avoid I/O latencies from a hard drive. Next it runs a number of commands over the tarball and collects benchmark data from them. The benchmark data is then saved inside CSV files that are reusable within spreadsheet applications. The data is also used to retrieve charts from the Google Chart API, and finally the script calls the ImageMagick tool ''convert'' to collect the charts inside a single image. The summary displayed on the standard output is also saved inside a text file.
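
To give an idea, the measuring loop at its heart looks more or less like this (a simplified sketch, not the actual script; the tool list, flags and CSV layout here are illustrative):
#!/usr/bin/ruby
# Time several compressors over one tarball and save the results as CSV
require "benchmark"
require "csv"

tarball = ARGV[0] || "/dev/shm/benchmark/data.tar"

tools = {
    "gzip"  => ["gzip -c",  ".gz"],
    "bzip2" => ["bzip2 -c", ".bz2"],
    "xz"    => ["xz -c",    ".xz"],
}

CSV.open("results.csv", "w") do |csv|
    csv << ["tool", "seconds", "bytes"]
    tools.each do |name, (cmd, ext)|
        out = tarball + ext
        seconds = Benchmark.realtime { system("#{cmd} #{tarball} > #{out}") }
        csv << [name, seconds.round(2), File.size(out)]
    end
end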

The script is a bit long to paste here (more or less 300 lines), so you can download it from my workstation. If the link doesn't work, make sure the web browser doesn't encode ~ (f.e. to "%257E"); I've seen this happen with Safari (in my logs)! If you are really out of luck, it is available on Pastebin.

Benchmarks

The benchmarks are available for three kinds of data: compressed media files, raw media files (image and sound; remember that the compression is lossless), and text files from an open source project.
Media Files
Does it make sense at all to compress already compressed data? Obviously not! Let's take a look at what happens anyway.


As you can see, the compression tools that focus on speed don't fail: they still do the job quickly while shaving off a few hundred kilobytes. The other tools, however, simply waste a lot of time for no gain at all.

So always make sure to use a backup application without compression for media files, or the CPU will be heating up for nothing.
Raw Media Files
Does it make sense to compress raw data? Not really. Here are the results:


There is some gain in the order of megabytes now, but the process is still the same and for that reason simply unsuitable. For media files there are dedicated formats that compress the data losslessly, with a higher ratio and a lot faster.

Let's compare lossless compression of a sound file. The initial WAV source file has a size of 44MB and lasts 4m20s. Compressing this file with xz takes about 90s, which is very long, and only reduces the size to 36MB. If instead you choose FLAC, a format designed for lossless audio compression, you get a record: the file is compressed in about 5s to a size of 24MB! The good thing about FLAC is that media players decode it with hardly any CPU cost.

The same goes for images, but I lack knowledge about photo formats, so your mileage may vary. Anyway, except for the Windows bitmap format, you will rarely find images stored uncompressed, just as you won't find videos uncompressed... TIFF and RAW are the formats provided by many reflex cameras; they have lossless compression capabilities and contain a lot of information about image colours and so on, which makes them perfect formats for photographers, as the photo itself doesn't contain any modifications. You can also choose the PNG format, but only for simple images.
Text Files
We get to the point where we can compare interesting results. Here we are compressing the kind of data that is most commonly distributed over the Internet.


Lzop and Gzip are fast and have a good ratio. Bzip2 has a better ratio, and the LZMA and LZMA2 algorithms an even better one. Whether we use an initial archive of 10MB, 100MB or 400MB, the charts always look like the one above. When choosing a compression format you get either good compression or speed, but definitely never both; you must choose between these two constraints.

Conclusion

I had never heard about the LZO format until I wanted to write this blog post. It looks like a good choice for end devices where CPU cost is crucial: the compression is always extremely fast, even for gigabytes of data, with a fairly good ratio. Gzip, the most widespread compression format, works much like Lzop, focusing by default on speed with good compression. But it can't beat Lzop in speed: even when compressing at level 1 it is slower by a matter of seconds, although it still beats Lzop on the final size. When compressing with Lzop at level 9, the speed gets ridiculously slow and the final size still doesn't beat Gzip at its default level, where Gzip does the job faster anyway.

Bzip2 sits between LZMA and Gzip. It is widely used as a default format nowadays because it beats Gzip in terms of compression ratio. It is of course slower for compression, but what is easily spotted is the decompression time: it is the worst of all, in all cases.

LZMA and LZMA2 behave almost identically. Unlike the other formats they use dynamic memory allocation: the bigger the input data, the more memory is allocated. We can see that LZMA2, the evolution of LZMA, uses less memory but on the other hand has a higher CPU time cost. Both have excellent decompression times, although Lzop and Gzip still have the best scores, but then again you can't have both an excellent compression ratio and an excellent compression time. The difference between the compression ratios of the two formats is in the order of hundreds of kilobytes; after all it is an evolution, not a revolution.

On a last note, I ran the benchmarks on an Intel Atom N270, which has two cores at 1.6GHz, but I made sure to run the compression tools on only one core.

A few interesting links:

2010-07-15

Don't produce Gzipped tarballs

A quick note so I can delete it from my desktop. In order to produce only a Bzip2 tarball with the Autotools, especially when running make distcheck, pass these parameters to the Automake init call:

AM_INIT_AUTOMAKE([no-dist-gzip dist-bzip2])

By the way, I wonder if it's worth dropping bzip2 in favour of xz.
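
If it is, Automake 1.11 and later also understand a dist-xz option, so the switch should be a one-liner (untested sketch):

AM_INIT_AUTOMAKE([no-dist-gzip dist-xz])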

2010-06-14

Major changes in the Xfce Task Manager going 1.0

It's done. The task manager application that has been available in Xfce for quite some years now ships with major changes. It has been rewritten from scratch, with GtkBuilder UI definitions and GObjects; everything is fresh and clean. The application supports Linux, OpenBSD, FreeBSD and OpenSolaris.

Let's start by visual changes:
  • The buttons at the bottom are gone and the progress bars at the top have vanished; say hello to a toolbar with buttons and monitors.
  • You read that right, monitors are in: they show a graph of the CPU and memory usage over time.
  • A status bar is visible at the bottom; it displays general information about the system usage.
  • Icons are displayed beneath the task names.

Let's continue with less visual:
  • Tasks that start are displayed with a green background for a short while, and tasks that terminate with a red background.
  • Tasks whose state is changing are temporarily displayed with a yellow background. This covers tasks changing their state from idle to running, vice versa, etc.
  • The tree view's context menu contains the same actions as before: sending signals to a task and changing its priority. They have been polished, however; for example the continue and stop signals aren't shown together anymore, and there are only five priorities to set, ranging from Very low to Very high.
  • The tree view columns can be reordered as you wish.
  • An optional status icon can be activated, allowing you to hide the application.
  • It is possible to display percentage values with more precision.
  • And finally, the default refresh rate is 750ms, and it can be adjusted from 500ms up to 10s.

And the result is as follows:


The application is fully translated into fifteen languages!

Go to the project webpage.

2010-05-30

Meego installation on a USB stick

This post wouldn't exist if the hard drive in my Acer Aspire One hadn't died. I have a fresh backup of the disk (a full 'dd' plus a separate one for just the home partition), so if I ever need something back (and I know I won't, but backups are important) I can always mount it in a loopback and copy files.

The hard drive is actually what I want to call a cheap and fake SSD. It's a PATA SSD that I'm sure I will never find a replacement for. Look:


Now the dilemma was simple: either I threw the netbook in the trash, or I found something else to boot from. I started to look for a solution, and Meego had just been released; this is crazy timing. I downloaded the boot image and tried it out, and guess what, it is getting better and better. It's definitely more beautiful, it is getting faster, it has better dialogs for customization. Just try it out if you haven't yet, you won't be disappointed but surprised.

So in the end, installing the system on a USB stick was the only solution I could come up with. I ordered an extra USB stick, but a mini one please, a Kingston DTmini10! Now when I tell people this is my actual hard drive, they are like “say whaaat.”

The installation of Meego didn't go that fluently. I have two USB sticks: one with the boot image, the other serving as the target device for the installation. The installation worked fine without any modification; it boots but ends on a black screen with the Caps Lock LED blinking. Boo, kernel panic, or something else ungroovy. I also tried an installation with the ext3 file system, the default being btrfs, but then the grub installer fails and the Meego installer gets knocked out in a waiting sequence. So I did a default installation again, sigh. After a search I tried out some parameters for the kernel command line, and adding “rootdelay=8” did the trick. In fact, the USB stick boots without problem, but past that point there is some delay before the kernel discovers the USB device, at which point you see the following message:

   sd 11:0:0:0: [sdx] Assuming drive cache: write through

Without the rootdelay parameter no root device is found, and booting just ain't gonna work out. End of story. There are some tiny tweaks to be done afterwards: the kernel command line must point to the right root device, and likewise the fstab file. The kernel command line can be edited in the file /boot/extlinux/extlinux.conf. Everything else works out just fine. Boot time, the rootdelay aside, is acceptable, but shutting down seems to take forever, and precisely when I want the netbook to turn off I want it to be really fast. I'm going to send it to sleep more often than usual, by closing and opening the lid, which is the fastest “boot” sequence one can get ;-)
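
For reference, the relevant entry in /boot/extlinux/extlinux.conf might look like this (an illustrative sketch; the kernel file name and the root= device depend on your installation):

DEFAULT meego
LABEL meego
    KERNEL vmlinuz
    APPEND root=/dev/sdb2 rootdelay=8 quiet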


Update: I was wrong to state that the shutdown process takes ages. I just did a shutdown and this time it went quickly, so something must have gone unlucky before, with the disk syncing something around and around.

2010-04-30

Moblin blazing fast

I updated my netbook to give it a new look. I swapped the Xfce Panel for bmpanel2 and changed the background (the previous one definitely lasted very long). Not many changes, but I clocked a cold boot of about six seconds; always faster, baby :-P And the window manager is OpenBox, by the way.


The only really useful entry missing in this panel is a battery monitor. At least I have an indicator above the keyboard that starts blinking when there is about three percent left. What I like about this panel are the cool themes it comes with; however, the configuration is set through a hand-written configuration file, which sucks, but what do you want, it is extremely lightweight on the other hand.

Update: Should I mention I totally forgot about the Xfce power manager? Well I did, and it comes with a notification icon displaying the battery status :-) However, I had to fix the default ACPI script related to the lid, since HAL doesn't list it, in order to get the netbook to go to sleep.

2010-04-04

VLC with GTK+ look-n-feel

To get Qt applications to look like GTK+ applications, run qtconfig and under Select GUI Style choose GTK+. Then click File > Save in the menu bar.

Something still puzzles me though: why does GNOME run VLC with the native GTK+ look-n-feel automatically, and Xfce not?

Update: Thanks to the power of tig and grep, I figured out that the Qt library (in its qt_init function) defines the desktop environment as GNOME for Xfce (which results in GTK+ theming, GNOME-like Open dialogues, etc.) by retrieving an X11 atom on the root window and comparing it to “xfce4”, but it seems that this doesn't work nowadays (at least it didn't work within an Xfce 4.7 desktop session). I'm looking forward to sending patches.
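
If memory serves, the atom in question is _DT_SAVE_MODE; assuming that is right, you can inspect what your session exposes on the root window with xprop:
$ xprop -root _DT_SAVE_MODE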

Update 2: The latest version, Qt 4.6.2, doesn't include the code that checks the X11 atom (it's in git), which explains why it doesn't work.

2010-03-27

New notes plugin release 1.7.3

Three months since the last release, and three months that it has been available as a separate standalone application running in the notification area. This has made it a lot easier to test and debug: before, I had to build the plugin, install it, and restart the panel or remove/re-add the plugin in the panel; now I just run ./xfce4-notes from the source directory.

This new release has seen some structural tree changes to save time during compilation. Now everything is in src/ and lib/. The lib/ directory contains the code to build an XnpHypertextView, an XnpNote (a composite widget that embeds a GtkScrolledWindow with an XnpHypertextView and sends “save” signals on changes), an XnpWindow with the custom-made navigation and title bars and the right-click menu on the title bar, and finally an XnpApplication class that is the heart of everything: it handles creation/deletion of notes, loads/saves the data, etc. The src/ directory contains the main files for the panel plugin, the status icon, the popup command and the settings dialogue.

The new stuff is mostly eye-candy, as stated in the previous blog entry. The GTK+ RC style has been pimped up with custom-made scrollbars, and the source code contains a self-drawn close button. The business of theming GTK+ scrollbars is roughly explained on live.gnome.org, but I opened the GTK+ Dust theme files, which were, to me, more understandable :-) It was also because of this particular theme that I took a look at customizing the scrollbars; see the before/after screenshots below. The older article about writing a widget with Cairo helped me start from scratch with an empty “close button” widget to replace the simple GtkButton with a label. As I very much liked the time spent on these changes, I contributed a tutorial, “Monochrome icon”, available only as PDF as of today, which I hope will be useful for Vala beginners, and it is also a nice update of the Cairo article, this time in the Vala language.


The fixes included in this release are the following: the sticky-window and keep-above states are correctly restored after some race conditions, and the tab label orientation is restored after renaming a note. Last but definitely not least, the undo feature was not working because an internal timeout wasn't reset to zero, which made the code think a snapshot was needed, and thus the undo/redo buffers ended up with the same content after the timeout elapsed. Thanks to Christian (the developer behind Midori), otherwise I would still not have taken a look at this!

The forthcoming features I have in mind are a search dialogue and per-note options for activating a stripped-down “markdown” syntax, a spelling checker, and word wrapping (currently the default).

The release is available at archive.xfce.org.

Thanks for the feedback and reports you have sent and will send.

Update: The tutorial is now also available on the Xfce wiki.

2010-03-08

Include custom GTK+ RC style

I've been using a custom GTK+ RC style for the notes plugin since version 1.4.0; right now it is at version 1.7.2. I have been playing with GTK+ theming again these last two hours, and I've got custom scrollbars, a gradient for the custom-made “title bar”, and better colours for the notebook to make the current tab stand out from the crowd.

While experimenting on a test-case I found a better way to parse a gtkrc file from the program. The first time, I fought with the existing gtk_rc related functions and gave up for a solution I partially dislike: including a line pointing to the custom gtkrc file within ~/.gtkrc-2.0.

Today I understood how gtk_rc_parse(filename) behaves. You have to call this function at the beginning of the program, before building any widgets; it will work even if the file doesn't exist yet. Then, while the program is running, you can modify the file, create it, delete it, truncate it, whatever, and call gtk_rc_reparse_all() to get the style refreshed in the GUI. It's hard to believe that such easy things are sometimes a PITA :-)
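
Put together, the pattern looks like this (a minimal GTK+ 2 sketch; the style file path is a made-up example):
#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    GtkWidget *window;

    gtk_init(&argc, &argv);

    /* Register the file before building any widget; it may not exist yet */
    gtk_rc_parse("/home/mike/.config/myapp/gtkrc");

    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_widget_show_all(window);

    /* Later, once the file has been created, modified or deleted, call
     * gtk_rc_reparse_all() to refresh the style in the GUI */

    gtk_main();
    return 0;
}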

Be prepared for a 1.7.3 notes plugin with nicer colours.

2010-03-01

Show/hide functionality from notification area

When using a status icon within the notification area it is common to use the left-click action to show/hide the main window. Obviously this is often done in different ways. So here is my tip on how to do it right :-)

What I believe to make the most sense is to:
  1. Check if the application is invisible and show it,
  2. Otherwise check if the window is inactive and present it,
  3. Otherwise hide it.
In C it looks like this:
/* Show the window */
if (!(GTK_WIDGET_VISIBLE(window))) {
    gtk_widget_show(window);
}
/* Present the window */
else if (!gtk_window_is_active(GTK_WINDOW(window))) {
    gtk_window_present(GTK_WINDOW(window));
}
/* Hide the window */
else {
    int winx, winy;
    gtk_window_get_position(GTK_WINDOW(window), &winx, &winy);
    gtk_widget_hide(window);
    gtk_window_move(GTK_WINDOW(window), winx, winy);
}
I have been doing this for quite a long time inside the Xfce Notes plugin, except a little differently, with multiple windows.

Some remarks: the PendingSealings page proposes gtk_widget_get_visible instead of the analogous macro. And as you may also notice, when the window is hidden it gets moved right after; this is important, as otherwise the window would be repositioned at its initial value once shown again (e.g. centre of the screen, or dynamically by the window manager).

2010-02-13

Eatmonkey 0.1.3 benchmarking

Eatmonkey has now been released for the 4th time, and I started to use it to download videos from FOSDEM 2010 by drag-and-dropping the links from the web page to the manager :-)

I downloaded four files, and while they were running I had a close look at top and iftop to monitor the CPU usage and the bandwidth used between the client and the server (the connection between eatmonkey and the aria2 XML-RPC server running on the localhost interface).

I got unexpected results and was surprised by the CPU usage. It is currently very high, which means I have a new task for the next milestone: getting the CPU footprint low. The bandwidth comes without surprises, but since the milestone will target performance wherever possible, I will trim down the number of requests made to the server. This problem is also noticeable in the GUI, which tends to micro-freeze during the updates of each download. So the more active downloads are running, the more the client freezes.

Some results, as they speak more than words:
Number of active downloads    Reception    Emission    CPU%
4 downloads                   144Kbps      18Kbps      30%
3 downloads                   108Kbps      14Kbps      26%
2 downloads                   73Kbps       11Kbps      18%

I will start by running benchmarks on the code itself, and thanks to Ruby there is built-in support for benchmarking and profiling. It comes with at least three different useful modules: benchmark, profile and profiler. The first measures the time the code needs to execute on the system; it is useful to compare different kinds of loops like for, while or do...while, or, for example, to see whether a string is best compared with a dummy compare function or with a compiled regular expression. The second simply needs to be required at the top of a Ruby script, and it will print a summary of the time spent within each method/function call. The third does the same, except that it is possible to run the Profiler around distinct blocks of code. So much for the presentation; below are some samples.

File benchmark.rb:
#!/usr/bin/ruby -w

require "benchmark"
require "pp"

integers = (1..10000).to_a
pp Benchmark.measure { integers.map { |i| i * i } }

Benchmark.bm(10) do |b|
    b.report("simple") { 50000.times { 1 + 2 } }
    b.report("complex") { 50000.times { 1 + 2 - 6 + 5 * 4 / 2 + 4 } }
    b.report("stupid") { 50000.times { "1".to_i + "3".to_i * "4".to_i - "2".to_i } }
end

words = IO.readlines("/usr/share/dict/words")
Benchmark.bm(10) do |b|
    b.report("include") { words.each { |w| next if w.include?("abe") } }
    b.report("regexp") { words.each { |w| next if w =~ /abe/ } }
end

File profile.rb:
#!/usr/bin/ruby -w

require "profile"

def factorial(n)
    n > 1 ? n * factorial(n - 1) : 1;
end

factorial(627)

File profiler.rb:
#!/usr/bin/ruby -w

require "profiler"

def factorial(n)
    (2..n).to_a.inject(1) { |product, i| product * i }
end

Profiler__.start_profile
factorial(627)
Profiler__.stop_profile
Profiler__.print_profile($stdout)
Update: The profiling showed that during a status request, 65% of the time is consumed by the XML parser. The REXML class is written 100% in Ruby, which gives a good hint that the same request done with a parser written in C may provide a real boost. On another note, the requests are now only run once periodically and cached inside the pooler. This means the emission bitrate is always the same, while the reception bitrate grows as more downloads are running. And as a side effect there is less XML parsing done, thus less CPU time used.

2010-02-06

Backward compatibility for Ruby 1.8

As I'm currently writing some Ruby code, and since I started with version 1.9, I fell into cases where some methods don't exist in Ruby 1.8. This is very annoying, and I started by switching the code to 1.8 method calls. I disliked this when it came to Process.spawn, which is a one-line call to execute a separate process; rewriting it takes around 5 lines instead.

So I had the idea to reuse something I had already seen once: I write a new file named compat18.rb and include it within the sources that need it. Ruby makes it very easy to add new methods to existing classes/modules, even if they already exist, so I just did it and it works like a charm.

Here is a small snippet:
class Array
        def find_index(obj)
                index(obj)
        end
end

class Dir
        # Dir.exists? is a class method in 1.9, hence the "self"
        def self.exists?(path)
                File.directory?(path)
        end
end
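
In the same spirit, here is a minimal sketch of a fallback for Process.spawn itself; it only handles the plain command-plus-arguments form (none of the option hashes the 1.9 version accepts):
module Process
        unless respond_to?(:spawn)
                # Run the command in a child process and return its pid,
                # as Process.spawn does in the simple case
                def self.spawn(*args)
                        fork { exec(*args) }
                end
        end
end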

Update: It can happen that a fallback method from Ruby 1.8 has been totally dropped and replaced by a new method in 1.9. In that case the older method has to be checked for existence, with a call to the parent otherwise.
class Array
        def count
                if defined? nitems
                        return nitems
                else
                        return super
                end
        end
end

2010-02-04

Fed up with Moblin

I'm slowly beginning to be fed up with Moblin, the base installation. The base system starts way too often with core dumps (a crash in mutter, for example, which also means X restarts), but mainly because of RPM. When PackageKit starts to check for an update, or when you do any installation/upgrade with yum (e.g. you use rpm directly or indirectly), the whole system becomes unusable: the browser acts as if it were frozen, it takes very long to switch between tasks, and all of this for at least a minute, up to an hour if you accept to run an update. You can call this whatever you want; I call it a big fail.

This happens on an Acer Aspire One 9", where I guess they installed the cheapest SSD out there.

In fact things were getting really bad when I switched to an Xfce session: I got unbelievably long startup times. Uxlaunch, the new automatic login application in Moblin 2.1, is totally uncooperative. The Xfce session ends up launching many tools and applications twice: two corewatcher-applets, two connman-applets, etc. Uxlaunch runs xfce4-session, but it also executes the same desktop files from the autostart directory (as it seems after a quick look at the code), which is a role taken by the Xfce session manager.

So I have been looking around to finally throw away some junk.

Now I have been looking closely at the autostart applications since the "all-in-twice" fiasco, to get this netbook fast again. Of course you have to know what you are doing; this kind of task isn't open to people without technical skills. First I changed the default "desktop" to Openbox, by downloading the source RPM package, compiling it, and putting it inside the uxlaunch configuration file. Then I removed some base packages and manually hid some desktop files to prevent them from autostarting. I played with the Hidden/NoDisplay keys, but they didn't have any effect on uxlaunch, so it ended with a chmod 000 command.

I dropped four packages: kerneloops, corewatcher and obexd/openobex. I really don't want them around anymore. And I "dropped" seven autostart files: ofono, which depends on a lot of applications, the bkl-orbiter, and the rest are Moblin panel related applications: bluetooth-panel, which I don't even have on this netbook, carrick-panel, as I use connman-applet, which works at least for an automatic connection, the two dalston applications dalston-power-applet and dalston-volume-applet, and at last moblin-panel-web.

I kept gnome-settings-daemon, although I have the Xfce settings daemon installed, which I prefer to some extent. After all this, I changed the GTK+ and icon themes through the gconf keys. And what's the conclusion? Moblin is nice, but I managed to munch it and enforce my desktop.

Update: After running under OpenBox I feel my remark about RPM is wrong; I don't know, maybe it is the mixed use of OpenGL that makes the tasks take ages to react. All in all, the default desktop environment is something where you must know about patience :-)

2010-01-24

The download manager is in the wild

So it's finally done. It took very long, but it's done. The download manager I once had in mind is taking off into the wild :-) Of course it took long because I never did anything with it: writing a front-end to wget/curl isn't interesting (who cares about downloading HTTP/FTP files when the web browser handles them for you anyway?), and reusing GVFS doesn't make sense, because you really don't want to download from your trash:// or whatever proto://, and again, HTTP/FTP alone is not interesting. Not at all. I have come across Uget and other very good projects, but most of them are either writing the code to handle protocols like HTTP themselves and/or looking forward to handling more interesting protocols like BitTorrent. I think that's a very tough job that demands too much from a one-maintainer project. Recently I saw the new release of aria2, which comes with an XML-RPC interface, and this took all my interest for 4 days. I believe this utility is very promising, and I would really like to write the good and user-friendly XML-RPC GUI client that it seems to be missing!

What is so exciting about aria2? If you know the project you don't have to read on, but it is worth mentioning the features of this small utility. It supports HTTP(S)/FTP, but also BitTorrent and Metalink. It is widely customizable for each specific protocol. It can download one file by splitting it into several pieces and using multiple connections, and it can even mix HTTP URIs with BitTorrent, at the same time uploading to BitTorrent peers what has been downloaded through HTTP. So this has to be the perfect candidate for writing a nice download manager, hasn't it?

The client is a very first version that I intended to code-name draft, although the release assistant on xfce.org doesn't allow this. Instead it will take the more neutral road of 0.1.0 to 0.1.1, etc., until 0.2.0, followed by stable fix releases.

Why draft? Simple. It's being written in a higher-level language than C, but not even Vala :-) High-level languages are a great deal when starting a new application: you can type more and get more, instead of typing like a dog for a rocking hot, well, lousy, window. Since I do like Ruby, it's currently written in Ruby, and it depends on the ruby-gnome2 project for the bindings. To get a picture, a main file that opens a window takes 3 lines. Of course the final version is meant to be written in Vala/C, but I still need to convince myself that Vala+libsoup isn't an option that is going to waste too much time. At first glance libsoup looks easy to use: it allows building XML-RPC requests, requesting the HTTP bodies and sending messages, but it is not an XML-RPC client, and you never know how well the Vala bindings will play. This means extra attention for small things. Starting an application from scratch with such constraints is usually a big time-killer, therefore reusing an existing XML-RPC client, as in this case, is very important. The GUI is done with Glade in GtkBuilder format, and reusing it in a new language will be pretty easy.
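
Those 3 lines, for the curious, look more or less like this (a sketch assuming the ruby-gnome2 gtk2 bindings):
require "gtk2"
Gtk::Window.new("draft").show_all
Gtk.main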


So what's next? I'll just wait for some feedback to see what the audience thinks about it, if anything at all, and polish here and there. Stay tuned for the next update.