
Posts Tagged ‘myprog’

A Generic Buffered Stream Wrapper

In C programming, the main difference between the low-level I/O functions (open/close/read/write) and the stream-level I/O functions (fopen/fclose/fread/fwrite) is that the stream-level functions are buffered. A low-level read() will typically incur a disk operation. Although the kernel may cache disk blocks, we cannot rely too much on this. Disk operations are expensive, which is why low-level I/O provides no fgetc() equivalent.

Stream-level I/O functions maintain a buffer. On reading, they load a block of data from disk into memory. If the data needed by an fgetc() call are already in memory, no disk operation is incurred, which greatly improves efficiency.
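To make the idea concrete, here is a hypothetical sketch of a buffered getc built on top of read(); the struct and function names are invented for illustration and are not the actual kstream.h internals:

#include <unistd.h>

typedef struct { // hypothetical buffered reader, for illustration only
	unsigned char buf[4096];
	int begin, end, is_eof, fd;
} bstream_t; // initialize begin = end = is_eof = 0 and fd to an open descriptor

static int bs_getc(bstream_t *s)
{
	if (s->begin >= s->end) { // buffer exhausted: one read() per block
		if (s->is_eof) return -1;
		s->begin = 0;
		s->end = (int)read(s->fd, s->buf, sizeof(s->buf));
		if (s->end <= 0) { s->is_eof = 1; return -1; }
	}
	return s->buf[s->begin++]; // served from memory, no system call
}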

Stream-level I/O functions are part of the standard C library. Why do we need a new wrapper? Three reasons. First, when you work with an alternative I/O library (such as zlib or libbzip2) that does not come with buffered I/O routines, you need a buffered wrapper to make your code efficient. Second, a generic wrapper makes your code more flexible when you want to change the type of the input stream. For example, you may want to write a parser that works on a normal stream, on a zlib-compressed stream and on a C string. A unified stream wrapper simplifies the coding. Third, my feeling is that most stream-level I/O functions in stdio.h are not convenient in that they cannot enlarge a string automatically. In a lot of cases, I need to read one line but I do not know in advance how long the line may be. Handling this case is not hard, but doing it again and again is boring.

In the end, I came up with my own buffered wrapper for input streams. It is generic in that it works on any type of I/O stream with a read() call (or equivalent), or even on a C string. I show an example here without much explanation; I may expand this post in the future. The source code can be found on my programs page.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h> /* for bzero() */
#include "kstream.h"
// arguments: type of the stream handler,
//   function to read a block, size of the buffer
KSTREAM_INIT(int, read, 10)

int main()
{
	int fd;
	kstream_t *ks;
	kstring_t str;
	bzero(&str, sizeof(kstring_t));
	fd = open("kstream.h", O_RDONLY);
	ks = ks_init(fd);
	while (ks_getuntil(ks, '\n', &str, 0) >= 0)
		printf("%s\n", str.s);
	ks_destroy(ks);
	free(str.s);
	close(fd);
	return 0;
}
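As a taste of the flexibility mentioned above, switching to a zlib-compressed stream only requires changing the KSTREAM_INIT line. A sketch, assuming the ks_* calls behave as in the example above:

#include <zlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include "kstream.h"
// the same wrapper, instantiated for zlib: gzFile is the stream type,
// gzread() reads a block, and 16384 is the buffer size
KSTREAM_INIT(gzFile, gzread, 16384)

int main()
{
	gzFile fp = gzopen("test.txt.gz", "r");
	kstream_t *ks = ks_init(fp);
	kstring_t str;
	bzero(&str, sizeof(kstring_t));
	while (ks_getuntil(ks, '\n', &str, 0) >= 0)
		puts(str.s);
	ks_destroy(ks);
	free(str.s);
	gzclose(fp);
	return 0;
}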

Read Full Post »

This is a follow-up to my previous post. Here I have changed the table to several charts, which I hope are more friendly to readers. You can find the links to these libraries in that table. Their source code, including my testing code, is available here. You may also want to see my previous posts from the last few days for my interpretation of the results.

On C string (char*) keys, I failed to get correct results with JE_rb_old and JE_rb_new on Mac, so they are not shown in the charts. I would really appreciate it if someone could give me a correct implementation using these libraries. In addition, tr1_unordered_map uses a lot of memory according to my program; the memory figures for its string keys are faked (estimated rather than measured).

For convenience, here are some brief descriptions of these libraries (in no particular order):

  • google_dense and google_sparse: Google’s sparsehash library. Google_dense is fast but memory-hungry, while google_sparse is the opposite.
  • sgi_hash_map and sgi_map: SGI’s STL that comes with g++-4. The backend of sgi_map is a three-pointer red-black tree.
  • tr1::unordered_map: GCC’s TR1 library that comes with g++-4. It implements a hash table.
  • rdestl::hash_map: from RDESTL, another implementation of STL.
  • uthash: a hash library in C
  • JG_btree: John-Mark Gurney’s btree library.
  • JE_rb_new, JE_rb_old, JE_trp_hash and JE_trp_prng: Jason Evans’ binary search tree libraries. JE_rb_new implements a left-leaning red-black tree; JE_rb_old a three-pointer red-black tree; both JE_trp_hash and JE_trp_prng implement treaps but with different strategies on randomness.
  • libavl_rb, libavl_prb, libavl_avl and libavl_bst: from GNU libavl. They implement a two-pointer red-black tree, a three-pointer red-black tree, an AVL tree and an unbalanced binary search tree, respectively.
  • NP_rbtree and NP_splaytree: Niels Provos’ tree library for FreeBSD. A three-pointer red-black tree and a splay tree.
  • TN_rbtree: Thomas Niemann’s red-black tree. I ported it to C++.
  • sglib_rbtree: from SGLIB. It implements a two-pointer recursive red-black tree (all the other binary search trees are implemented without recursion).
  • libavl_avl_cpp, libavl_rb_cpp and libavl_rb_cpp2: incomplete C++ ports of libavl (no iterators), done by me. Libavl_rb_cpp2 further uses the same technique as JE_rb_new to save the color bit. Source code is available in the package.
  • khash and kbtree: my hash table and B-tree implementations. Kbtree is based on JG_btree.

Read Full Post »

Over the weekend, I have done a more comprehensive benchmark of various search tree libraries. Two AVL tree, seven red-black tree, one splay tree and two treap implementations are involved, together with seven hash table libraries. As I need to present a big table, I have to write it on a free-style HTML page. You can find the complete benchmark here and all the source code here. I only copy the “concluding remarks” of the benchmark page as follows:

  • Hash table is preferred over search trees if we do not require order.
  • In applications similar to my example, B-tree is better than most of binary search trees in terms of both speed and memory.
  • AVL tree and red-black tree are the best general-purpose BSTs. They are very close in efficiency.
  • For pure C libraries, using macros is usually more efficient than using void* to achieve generic programming.

You can find the results and much more discussion on that page. If you think the source code or the design of the benchmark can be improved, please leave a comment here or send me an E-mail. In addition, I failed to use several libraries, and so you can see some blanks in the table. I would also appreciate it if someone could show me how to use those libraries correctly.

Read Full Post »

B-tree vs. Binary Search Tree

When talking about in-memory search trees, we usually think of various binary search trees: red-black tree, AVL tree, treap, splay tree and so on. We do not often think of the B-tree, as the B-tree is commonly introduced as an on-disk data structure rather than an in-memory one. Is the B-tree also a good data structure for an in-memory ordered dictionary? I used to search for performance comparisons between B-trees and binary search trees, but ended up with nothing useful. It seems that I am the only one interested in such a comparison, and so I have to do it by myself.

I found John-Mark Gurney’s B-tree via a Google search. It is well coded and full of clever ideas. The original version has a small memory footprint, but it is not as fast as STL’s red-black tree. I studied the source code and thought I should be able to optimize it further. In the end, I got my kbtree.h macro library. As you can see in my hash table benchmark, the modified version beats STL set while using even less memory than the original version. I think I am now in a position to say: at least for some applications, the B-tree is a better ordered data structure than most binary search trees.

The most attractive feature of the B-tree is its small memory usage. A binary tree needs at least two pointers for each record, which amounts to 16N bytes on a modern 64-bit system. A B-tree needs only one pointer per node. Although a B-tree node may not be full, a sufficiently large B-tree is at least 50% full by definition and on average around 75% full. On a 64-bit system, the extra memory is thus only 8N/0.75 + KN(1/0.75 - 1) ≈ (10.7+0.33K)N bytes, where K is the size of a key. In fact we can do even better, as we do not need to allocate the null pointers in leaves. The practical memory overhead can be reduced to below (5+0.33K)N bytes (and as the majority of nodes in a B-tree are leaves, the factor 5 should be even smaller in practice), far better than a binary search tree. On speed, no binary search tree with just two additional pointers (splay tree and hash treap) can achieve the best performance. We usually need additional information at each node (AVL tree and standard red-black tree) or a random number (treap) to get good performance. The B-tree is different. It is even faster than the standard red-black tree while still using only (5+0.33K)N bytes of extra memory! People should definitely pay more attention to B-trees.

Update: The modified B-tree is available here (HTML) as a single C header file. An example is here. Currently, the APIs are not very friendly but are ready to use, in case you want to give it a try. Note that you must make sure the key is absent from the tree before calling kb_put() and that the key is present in the tree before calling kb_del().
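For a taste of the APIs, here is a minimal sketch. The macro names follow kbtree.h as I know it, but treat the details as approximate:

#include "kbtree.h"

// instantiate a B-tree keyed by int; the comparator returns <0, 0 or >0
#define int_cmp(a, b) (((a) > (b)) - ((a) < (b)))
KBTREE_INIT(itr, int, int_cmp)

int main()
{
	kbtree_t(itr) *b = kb_init(itr, KB_DEFAULT_SIZE);
	int k = 42;
	if (kb_get(itr, b, k) == 0) kb_put(itr, b, k); // key must be absent before kb_put()
	if (kb_get(itr, b, k) != 0) kb_del(itr, b, k); // and present before kb_del()
	kb_destroy(itr, b);
	return 0;
}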

Someone has corrected me. STL is a specification, not an implementation. By STL in my blog, I always mean SGI STL, the default STL that comes with GCC.

Over the weekend I have done a more complete benchmark of various libraries on search trees and hash tables. Please read this post if you are interested.

I realize that a lot of red-black tree implementations do not need a parent pointer, although SGI STL’s does use one. My comment below is somewhat wrong.

Update 2: kbtree.h has long been moved here, along with khash.h and my other 1- or 2-file libraries.

Read Full Post »

I do not see much need for a vector container in C, as a vector is simply an array and array operations are all very simple. Nonetheless, it might still be better to implement one for the sake of completeness. Here is the code. The library is almost as fast as the fastest code you can write in C.

#ifndef AC_KVEC_H
#define AC_KVEC_H

#include <stdlib.h>
#include <stdint.h>

#define kv_roundup32(x) (--(x), (x)|=(x)>>1, (x)|=(x)>>2, (x)|=(x)>>4, (x)|=(x)>>8, (x)|=(x)>>16, ++(x))

#define kvec_t(type) struct { uint32_t n, m; type *a; }
#define kv_init(v) ((v).n = (v).m = 0, (v).a = 0)
#define kv_destroy(v) free((v).a)
#define kv_A(v, i) ((v).a[(i)])
#define kv_pop(v) ((v).a[--(v).n])
#define kv_size(v) ((v).n)
#define kv_max(v) ((v).m)

#define kv_resize(type, v, s) ((v).m = (s), (v).a = (type*)realloc((v).a, sizeof(type) * (v).m))

#define kv_push(type, v, x) do { \
		if ((v).n == (v).m) { \
			(v).m = (v).m? (v).m<<1 : 2; \
			(v).a = (type*)realloc((v).a, sizeof(type) * (v).m); \
		} \
		(v).a[(v).n++] = (x); \
	} while (0)

#define kv_pushp(type, v) (((v).n == (v).m)? \
		((v).m = ((v).m? (v).m<<1 : 2), \
		 (v).a = (type*)realloc((v).a, sizeof(type) * (v).m), 0) \
		: 0), ((v).a + ((v).n++))

#define kv_a(type, v, i) ((v).m <= (i)? \
		((v).m = (v).n = (i) + 1, kv_roundup32((v).m), \
		 (v).a = (type*)realloc((v).a, sizeof(type) * (v).m), 0) \
		: (v).n <= (i)? (v).n = (i) + 1 \
		: 0), (v).a[(i)]

#endif
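A minimal usage sketch, assuming the header above is saved as kvec.h:

#include <stdio.h>
#include "kvec.h"

int main()
{
	kvec_t(int) v;
	kv_init(v);
	kv_push(int, v, 10);  // append one element
	kv_a(int, v, 20) = 5; // writing past the end grows the vector
	printf("size=%u, v[0]=%d, v[20]=%d\n", kv_size(v), kv_A(v, 0), kv_A(v, 20));
	kv_destroy(v);
	return 0;
}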

Read Full Post »

Calculating Median

Here is an example where Google does not give me the answer on the first result page. I wanted to know how to calculate a median efficiently, and so I searched “c calculate median”. On the first result page, Google brings me to several forums that only show very naive implementations. The 11th result, this page, is the truly invaluable one, which should be favoured by most programmers. I do not want to replicate that website. I just want to show you a function adapted from quickselect.c on that website. This function calculates the k-th smallest (0<=k<n) element of an array. Its time complexity is linear in the size of the array, and in practice it runs much faster than sorting and then locating the k-th smallest element.

type_t ks_ksmall(size_t n, type_t arr[], size_t kk)
{
	type_t *low, *high, *k, *ll, *hh, *middle;
	low = arr; high = arr + n - 1; k = arr + kk;
	for (;;) {
		if (high <= low) return *k;
		if (high == low + 1) {
			if (lt(*high, *low)) swap(type_t, *low, *high);
			return *k;
		}
		middle = low + (high - low) / 2;
		if (lt(*high, *middle)) swap(type_t, *middle, *high);
		if (lt(*high, *low)) swap(type_t, *low, *high);
		if (lt(*low, *middle)) swap(type_t, *middle, *low);
		swap(type_t, *middle, *(low+1)) ;
		ll = low + 1; hh = high;
		for (;;) {
			do ++ll; while (lt(*ll, *low));
			do --hh; while (lt(*low, *hh));
			if (hh < ll) break;
			swap(type_t, *ll, *hh);
		}
		swap(type_t, *low, *hh);
		if (hh <= k) low = ll;
		if (hh >= k) high = hh - 1;
	}
}

In this function, type_t is a type, swap() is a macro or function that swaps two values, and lt() is a macro or function that returns true if and only if the first value is smaller.
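For instance, the median of a double array can be computed along the following lines; the lt() and swap() definitions here are just one possible choice:

#include <stdio.h>

typedef double type_t;
#define lt(a, b) ((a) < (b))
#define swap(t, a, b) do { t _tmp = (a); (a) = (b); (b) = _tmp; } while (0)

/* ... paste ks_ksmall() from above here ... */

int main()
{
	double a[7] = {3, 1, 4, 1, 5, 9, 2};
	// for 7 elements, the median is the 3rd smallest element (counting from 0)
	printf("median = %g\n", ks_ksmall(7, a, 3));
	return 0;
}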

Read Full Post »

Synopsis

Here is a simple example showing how to use the khash.h library:

#include "khash.h"
KHASH_MAP_INIT_INT(32, char)
int main() {
	int ret, is_missing;
	khiter_t k;
	khash_t(32) *h = kh_init(32);
	k = kh_put(32, h, 5, &ret);
	if (!ret) kh_del(32, h, k);
	k = kh_get(32, h, 10);
	is_missing = (k == kh_end(h));
	k = kh_get(32, h, 5);
	kh_del(32, h, k);
	for (k = kh_begin(h); k != kh_end(h); ++k)
		if (kh_exist(h, k)) kh_value(h, k) = 1;
	kh_destroy(32, h);
	return 0;
}

The second line says we want to use a hash map with int as the key and char as the value. khash_t(32) is a type. kh_get() and kh_put() return an iterator, i.e. the position in the hash table. kh_del() erases the key-value pair in the bucket pointed to by the iterator. kh_begin() and kh_end() return the begin and end iterators, respectively. And kh_exist() tests whether the bucket at an iterator holds a key-value pair. The APIs are not as concise as C++ templates, but they are very straightforward and flexible. How can this be done?

Achieving generic programming in C

The core part of khash.h is:

#define KH_INIT(name, key_t, val_t, is_map, _hashf, _hasheq) \
  typedef struct { \
    int n_buckets, size, n_occupied, upper_bound; \
    unsigned *flags; \
    key_t *keys; \
    val_t *vals; \
  } kh_##name##_t; \
  static inline kh_##name##_t *init_##name() { \
    return (kh_##name##_t*)calloc(1, sizeof(kh_##name##_t)); \
  } \
  static inline int get_##name(kh_##name##_t *h, key_t k) \
  ... \
  static inline void destroy_##name(kh_##name##_t *h) { \
    if (h) { \
      free(h->keys); free(h->flags); free(h->vals); free(h); \
    } \
  }

#define _int_hf(key) (unsigned)(key)
#define _int_heq(a, b) (a == b)
#define khash_t(name) kh_##name##_t
#define kh_init(name) init_##name()
#define kh_get(name, h, k) get_##name(h, k)
#define kh_destroy(name, h) destroy_##name(h)
...
#define KHASH_MAP_INIT_INT(name, val_t) \
	KH_INIT(name, unsigned, val_t, 1, _int_hf, _int_heq)

In the macro ‘KH_INIT’, name is a unique symbol that distinguishes hash tables of different types, key_t is the type of the key, val_t the type of the value, is_map is 0 or 1 indicating whether to allocate memory for vals, _hashf is a hash function/macro and _hasheq the comparison function/macro. The macro ‘KHASH_MAP_INIT_INT’ is a convenient interface to hashes with integer keys.

When ‘KHASH_MAP_INIT_INT(int, char)’ is used in a C source file, the following code will be inserted:

  typedef struct {
    int n_buckets, size, n_occupied, upper_bound;
    unsigned *flags;
    unsigned *keys;
    char *vals;
  } kh_int_t;
  static inline kh_int_t *init_int() {
    return (kh_int_t*)calloc(1, sizeof(kh_int_t));
  }
  static inline int get_int(kh_int_t *h, unsigned k)
  ...
  static inline void destroy_int(kh_int_t *h) {
    if (h) {
      free(h->keys); free(h->flags); free(h->vals); free(h);
    }
  }

And when we call ‘kh_get(int, h, 5)’, we are actually calling ‘get_int(h, 5)’, which was defined by the ‘KH_INIT(int, ...)’ macro expansion. In this way, we can effectively achieve generic programming with simple interfaces. As inline functions and macros are used throughout, the efficiency is not affected at all. In my hash table benchmark, khash is as fast and lightweight as the C++ implementations.

Other technical concerns

  • Solving collisions. I have discussed this in my previous post. I care more about small memory usage, and therefore I chose open addressing.
  • Grouping key-value pairs or not. In the current implementation, keys and values are kept in separate arrays. This strategy causes additional cache misses, as a key and its value are fetched from two places. Grouping each key-value pair in a struct is more cache-efficient. However, the good side of separating keys and values is that it avoids wasting memory when the key type and the value type cannot be aligned well (e.g. the key is an integer while the value is a character). I would rather trade a bit of speed for smaller memory. In addition, it is not hard to use a struct as a key in the current framework.
  • Space-efficient rehashing. Traditional rehashing requires allocating a second hash table and moving elements from the old table to the new one. For most hash implementations, this means we need 50% extra working space to enlarge a hash. This is not necessary. In khash.h, only a new flags array is allocated on rehashing. The keys and values arrays are enlarged with realloc, which does not claim more memory than the new hash needs, and keys and values are moved from their old positions to their new positions within the same memory. This strategy also helps to clear all buckets marked as deleted without changing the size of a hash. A sketch of the idea follows this list.
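To illustrate the in-place rehashing idea, here is a toy open-addressing integer set; all the names are invented for this sketch, and khash’s real internals differ in the details (2-bit flags, hash function, tombstones):

#include <stdio.h>
#include <stdlib.h>

typedef struct {
	unsigned n_buckets;   // always a power of 2
	unsigned *keys;
	unsigned char *flags; // 0 = empty, 1 = occupied
} toyhash_t;

// grow to new_n buckets; only the new flags array is extra working space
static void toy_rehash(toyhash_t *h, unsigned new_n)
{
	unsigned char *new_flags = (unsigned char*)calloc(new_n, 1);
	h->keys = (unsigned*)realloc(h->keys, new_n * sizeof(unsigned));
	for (unsigned j = 0; j < h->n_buckets; ++j) {
		if (h->flags[j] != 1) continue; // empty, or already moved
		unsigned key = h->keys[j];
		h->flags[j] = 0;                // vacate the old slot
		for (;;) {                      // kick-out loop
			unsigned i = key & (new_n - 1);                 // new position
			while (new_flags[i]) i = (i + 1) & (new_n - 1); // linear probing
			new_flags[i] = 1;           // claim the new slot
			if (i < h->n_buckets && h->flags[i] == 1) {
				// an unmoved old element lives here: swap it out and continue
				unsigned tmp = h->keys[i];
				h->keys[i] = key; key = tmp;
				h->flags[i] = 0;
			} else {                    // free slot: settle down
				h->keys[i] = key;
				break;
			}
		}
	}
	free(h->flags);
	h->flags = new_flags; h->n_buckets = new_n;
}

static void toy_put(toyhash_t *h, unsigned key) // no resizing check, demo only
{
	unsigned i = key & (h->n_buckets - 1);
	while (h->flags[i]) i = (i + 1) & (h->n_buckets - 1);
	h->flags[i] = 1; h->keys[i] = key;
}

int main()
{
	toyhash_t h = { 4, (unsigned*)malloc(4 * sizeof(unsigned)), (unsigned char*)calloc(4, 1) };
	toy_put(&h, 1); toy_put(&h, 5); toy_put(&h, 9);
	toy_rehash(&h, 8);
	for (unsigned j = 0; j < h.n_buckets; ++j)
		if (h.flags[j]) printf("bucket %u: key %u\n", j, h.keys[j]);
	free(h.keys); free(h.flags);
	return 0;
}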

Read Full Post »

As a Perl programmer, I greatly enjoy using hash tables, and I keep this habit in C/C++ programming. So what C/C++ hash libraries are available? How do they compare to each other? In this post, I will give a brief review of hash libraries and present a small benchmark showing their practical performance.

Hash table libraries

In C++, the most widely used hash table implementation is hash_map/set in SGI STL, which is part of the GCC compiler. Note that hash_map/set is SGI’s extension to STL, not part of STL itself. TR1 (Technical Report 1) tries to standardize hash tables. It provides unordered_map/set with an API similar to hash_map/set. Most TR1 routines have been available since gcc-4.0. Google sparsehash is another C++ hash table template library with an API similar to hash_map/set. It provides two implementations, one efficient in speed and the other in memory.

In contrast, there are few good C libraries around. I have tried SunriseDD, uthash, the glibc hash table, hashit, Christopher Clark’s hashtable, the glib hash table and ghthash. SunriseDD sounds like a great library that implements a lock-free hash table. However, I am not sure how to install or use it, although the code itself is well documented. Uthash is a single header file. It is quite complex to use and incompatible with C++. It also lacks basic APIs such as counting the number of elements in the hash table. Glibc hash and hashit seem to implement only static hash tables. Glibc hash does not even have a deletion operation. Only glib hash, CC’s hashtable and ghthash implement most of the common operations, and they still have weaknesses in comparison to the C++ implementations (see below).

Design of the benchmark

The benchmark is comprised of two experiments. In the first experiment, a random integer array with 5 million elements is generated, containing about 1.25 million distinct keys. Each element is then tested for presence in the hash: if it is present, it is removed; otherwise, it is inserted. 625,792 distinct keys remain in the hash after this process. To test performance on string input, I convert the integers to strings with sprintf().

The second experiment was designed by Craig Silverstein, the author of sparsehash, and I am using his source code. This experiment tests the performance of insertion into a zero-sized hash, insertion into a preallocated hash, replacement, query, query on an empty hash, and removal.

Results

The following table gives the results of the first experiment:

Library          Mac int    Mac str    Mac peak   Linux int  Linux str  Linux peak
                 CPU (sec)  CPU (sec)  mem (MB)   CPU (sec)  CPU (sec)  mem (MB)
glib             1.904      2.436      11.192     3.490      4.720      24.968
ghthash          2.593      2.869      29.0/39.0  3.260      3.460      61.232
CC’s hashtable   2.740      3.424      59.756     3.040      4.050      129.020
TR1              1.371      2.571      16.140     1.750      3.300      28.648
STL hash_set     1.631      2.698      14.592     2.070      3.430      25.764
google-sparse    2.957      6.098      4.800      2.560      6.930      5.42/8.54
google-dense     0.700      2.833      24.616     0.550      2.820      24.7/49.3
khash (C++)      1.089      2.372      6.772      1.100      2.900      6.88/13.1
khash (C)        0.987      2.294      6.780      1.140      2.940      6.91/13.1
STL set (RB)     5.898      12.978     19.868     7.840      18.620     29.388
kbtree (C)       3.080      13.413     3.268      4.260      17.620     4.86/9.59
NP’s splaytree   8.455      23.369     8.936      11.180     27.610     19.024

Notes:

  • Please be aware that changing the size of the input data may change the speed and memory rankings. The speed of a library may vary by up to 10% between two runs.
  • CPU time is measured in seconds. Memory denotes the peak memory, measured in MB.
  • For string hash, only the pointer to a string is inserted. Memory in the table does not count the space used by strings.
  • If two numbers are given for memory, the first is for integer keys and the second for string keys.
  • For all C++ libraries and khash.h, one operation is needed to achieve “insert if absent; delete otherwise”. Glib and ghthash require two operations, which does not favour these two libraries.
  • The speed may also be influenced by the efficiency of the hash functions. Khash and glib use the same hash function; TR1/SGI-STL/google-hash use another one. Fortunately, in my experiment the two string hash functions have quite similar performance, and so the benchmark reflects the performance of the hash libraries as a whole rather than just the hash functions.
  • For glib and ghthash, what is inserted is the pointer to the integer instead of the integer itself.
  • Ghthash supports dynamic hash tables. However, the results do not seem correct when this is switched on, so I am using a fixed-size hash table. This favours ghthash.
  • CC’s hashtable insists on freeing a key, which is not implemented in any of the other libraries. This behaviour adds overhead on both speed and memory in my benchmark (but probably not in other applications). Its memory is measured for integer keys.
  • This simple benchmark does not test the strength and weakness of splay tree.

And here is the result of the second experiment:

Library         grow   pred/grow  replace  fetch  fetchnull  remove  memory
TR1             194.2  183.9      30.7     15.6   15.2       83.4    224.6
STL hash_map    149.0  110.5      35.6     11.5   14.0       87.2    204.2
STL map         289.9  289.9      141.3    134.3  7.0        288.6   236.8
google-sparse   417.2  237.6      89.5     84.0   12.1       100.4   85.4
google-dense    108.4  39.4       17.8     8.3    2.8        18.0    256.0
khash (C++)     111.2  99.2       26.1     11.5   3.0        17.4    198.0

Notes:

  • CPU time is measured in nanoseconds per operation. Memory is measured by TCmalloc; it is the difference in memory before and after the allocation of the hash table, rather than the peak memory.
  • In this experiment, integers are inserted in order and there are no collisions in the hash table.
  • All these libraries provide similar APIs.

Discussions

  • Speed and memory. The larger the hash table, the fewer collisions occur and the faster the speed. For the same hash library, increasing memory always increases speed. When we compare two libraries, both speed and memory should be considered.
  • C vs. C++. All the C++ implementations have similar APIs and are very easy to use for any type of key. The two C libraries, ghthash and glib, can only keep pointers to the keys, which complicates the API and increases memory, especially on 64-bit systems where a pointer takes 8 bytes. In general, the C++ libraries are preferred over the C ones. Surprisingly, on 32-bit Mac OS X, glib outperforms TR1 and STL on string input. This might indicate that the glib implementation itself is very efficient; it is just the lack of generic typing in C that hurts its performance.
  • Generic programming in C. Except for my khash.h, all the other C hash libraries use void* to achieve generic typing. Using void* is okay for strings, but causes overhead for integers. This is why all the C libraries except khash.h are slower than the C++ libraries on integer keys, but close on string keys.
  • Open addressing vs. chaining. Khash and the Google hashes implement open addressing, while the rest implement chaining. In open addressing, the overhead of each bucket is the size of a key plus 0.25 bytes (two bits of flags). Google sparsehash further compresses unused buckets to about one bit each, achieving high memory efficiency. In chaining, the memory overhead of each bucket is at least 4 bytes on 32-bit machines, or 8 bytes on 64-bit machines. However, chaining is less affected when the hash table is nearly full. In practice, open addressing and chaining occupy similar memory at similar speed. Khash takes less peak memory mainly due to its rehashing technique, which reduces memory usage. As far as speed is concerned, chaining may need fewer comparisons between keys; we can see this from the fact that the speed of chaining approaches that of open addressing on string keys but is much slower on integer keys.
  • Memory usage of search trees. The B-tree is the winner here. Each element in a B-tree needs only one additional pointer. When there are enough elements, a B-tree is at least half full; on average it should be around 75% full. And so on a 64-bit system, a B-tree with N elements needs about 8N/0.75 ≈ 10.7N bytes of additional memory. A splay tree needs N*8*2 = 16N bytes of extra space. The RB tree is the worst.
  • Other issues. (a) Google hash becomes unbearably slow when I try to put a lot of strings in the hash table; none of the other libraries has this problem. (b) Google hash performs more comparisons than khash. This can be seen from the fact that google-dense is clearly faster on integer keys, where comparisons are cheap, but only comparable to khash on string keys, where comparisons are expensive.

Concluding remarks

  • C++ hash libraries are much easier to use than the C libraries. This is definitely a place where C++ is preferred over C.
  • The TR1 hash implementation is no faster than the SGI STL implementation. Each may outperform the other under certain inputs or settings.
  • SGI hash_map is faster and takes less memory than STL map. Unless ordering is important, hash_map is a better container than map.
  • Google hash is a worthy choice so long as we understand why it can be slow with many string keys.
  • My khash library, which is a single-file C++ template header, achieves a good balance between speed and memory. All my source code is available on the Programs page.

Update

  1. C interface can be elegant, too, if we implement it cleverly. See this post.
  2. I realize that we just need one lookup to achieve “insert if absent; delete otherwise”. This further improves the speed for all C++ libraries.
  3. I have analyzed the Google dense hash table in this post, which explains why it is faster than khash on integer keys but close to or slower than khash on string keys.
  4. This thread directed me to the gcc hashtable and the cocom hashtable. They are more or less independent of other source code, but it would still take time to separate them out, so I have not benchmarked them; I am just keeping a record here.
  5. The Python dictionary is in fact a hash table. The dictnotes.txt in that directory gives some quite interesting discussion of how to implement a hash efficiently.
  6. The hashlib library: a bit hard to use, and I cannot get it running correctly. Possibly I have not provided a proper second hash function for rehashing.
  7. Added results for STL set (based on a red-black tree) and John-Mark Gurney’s B-tree implementation (JG’s btree). Both are considerably slower than the hash tables. Of course, search trees provide more functionality than hash tables, and every nice thing comes with a price. I have also tried Jason Evans’s and Niels Provos’ red-black tree implementations. On integer keys, JE’s takes 6.110 seconds on Mac-Intel using 18.884 MB of memory, and NP’s takes 6.611 seconds using the same amount of memory. This performance is close to that of STL set. They appear slower mainly due to the additional malloc/free calls I have to make under their APIs. Unlike hash tables, which can be implemented in a variety of ways, a red-black tree is usually implemented in one way (well, there can be more; see also Jason’s blog). And so I only show the performance of STL set as a representative.
  8. Replaced JG’s B-tree with a modified version. The new version is both faster and more lightweight.

Read Full Post »

Sorting algorithm

Given an array of size N, sorting can be done in O(N log N) time on average. The most frequently used sorting algorithms that achieve this time complexity are quicksort, heapsort and mergesort. They usually require O(log N), O(1) and O(N) working space, respectively (the space complexity of mergesort can be improved at the cost of speed). Most people believe quicksort is the fastest sorting algorithm. However, the fact is that quicksort is only fast in terms of the number of swaps. When comparison is expensive, mergesort is faster than quicksort because mergesort uses fewer comparisons. GNU sort uses mergesort; replacing it with a quicksort reduces its speed on typical text input. In addition, of the three algorithms, only mergesort is stable. Stability is sometimes useful for a general tool like GNU sort.

The worst-case time complexity of quicksort is O(N^2). In practice, we combine quicksort and heapsort to avoid the worst-case performance while retaining the fast average speed: run quicksort, but once the recursion gets too deep, switch to heapsort. The resulting algorithm is called introsort (introspective sort).
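A compact sketch of the idea on int arrays (not my actual implementation, which is a generic template):

#include <stdio.h>

// sift a[k] down within a max-heap of size n
static void sift_down(int *a, int k, int n)
{
	for (;;) {
		int c = 2 * k + 1;
		if (c >= n) break;
		if (c + 1 < n && a[c + 1] > a[c]) ++c; // pick the larger child
		if (a[k] >= a[c]) break;
		int t = a[k]; a[k] = a[c]; a[c] = t;
		k = c;
	}
}

static void heap_sort(int *a, int n)
{
	for (int i = n / 2 - 1; i >= 0; --i) sift_down(a, i, n); // build a max-heap
	for (int e = n - 1; e > 0; --e) {
		int t = a[0]; a[0] = a[e]; a[e] = t; // move the current max to the end
		sift_down(a, 0, e);
	}
}

static void introsort(int *a, int n, int depth)
{
	while (n > 1) {
		if (depth-- <= 0) { heap_sort(a, n); return; } // recursion too deep: fall back
		int t = a[0]; a[0] = a[n / 2]; a[n / 2] = t;   // take the middle element as pivot
		int pivot = a[0], i = -1, j = n;
		for (;;) { // Hoare partitioning
			do ++i; while (a[i] < pivot);
			do --j; while (a[j] > pivot);
			if (i >= j) break;
			t = a[i]; a[i] = a[j]; a[j] = t;
		}
		introsort(a, j + 1, depth); // recurse on the left part a[0..j]
		a += j + 1; n -= j + 1;     // loop on the right part
	}
}

int main()
{
	int a[] = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0}, n = 10, depth = 0;
	for (int m = n; m > 1; m >>= 1) depth += 2; // ~2*log2(n) depth budget
	introsort(a, n, depth);
	for (int i = 0; i < n; ++i) printf("%d ", a[i]);
	putchar('\n');
	return 0;
}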

Implementation

The two most widely used implementations are glibc qsort and STL’s (unstable) introsort. Libc qsort calls a function for each comparison. For simple comparisons, a function call is expensive, which may greatly hurt the efficiency of qsort. STL sort does not have this problem. It is one of the fastest implementations I am aware of. My own implementation of introsort is similar to, but not as fast as, STL introsort.

GNU sort implements a top-down recursive mergesort. On integer sorting, it is twice as slow as introsort (see below). An iterative top-down mergesort is hard to implement; an iterative bottom-up mergesort is much easier. My implementation is a bottom-up one.
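The bottom-up structure looks roughly like the following on ints (a sketch, not my actual templated code):

#include <stdlib.h>
#include <string.h>

void merge_sort_int(int *a, size_t n)
{
	int *buf = (int*)malloc(n * sizeof(int)), *in = a, *out = buf;
	for (size_t step = 1; step < n; step <<= 1) { // runs double in length each pass
		for (size_t lo = 0; lo < n; lo += 2 * step) { // merge in[lo,mid) and in[mid,hi)
			size_t mid = lo + step < n ? lo + step : n;
			size_t hi = lo + 2 * step < n ? lo + 2 * step : n;
			size_t i = lo, j = mid, k = lo;
			while (i < mid && j < hi) out[k++] = in[i] <= in[j] ? in[i++] : in[j++];
			while (i < mid) out[k++] = in[i++];
			while (j < hi) out[k++] = in[j++];
		}
		int *tmp = in; in = out; out = tmp; // ping-pong between the two buffers
	}
	if (in != a) memcpy(a, in, n * sizeof(int)); // result may end up in the buffer
	free(buf);
}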

Paul Hsieh has also implemented quicksort, heapsort and mergesort. His implementations should be very efficient from what I can tell. To see whether my implementation is good enough, I copied and pasted his code into my program and applied “inline” where necessary.

Comparison

I designed a small benchmark of sorting 50 million random integers. As comparison is cheap in this case, the number of swaps dominates the performance. I compiled and ran the program on three machines: MacIntel (Core2-2G/Mac/g++-4.2.1), LinuxIntel (Xeon-1.86G/Linux/g++-4.1.2) and LinuxAMD (Opteron-2G/Linux/g++-3.4.4). On all three platforms, the program was compiled with “-O2 -fomit-frame-pointer”. The time (in seconds) spent on sorting is shown in the following table:

Algorithm              MacIntel  LinuxIntel  LinuxAMD  Linux_icc
STL sort               7.749     8.260       7.170     8.400
STL stable_sort        9.684     11.990      10.270    10.770
libc qsort             16.579    81.190      30.490    81.120
introsort              7.887     8.880       7.670     9.320
iterative mergesort    10.371    12.480      10.110    10.130
binary heapsort        36.651    45.710      42.460    40.820
combsort11             18.131    19.290      19.370    19.490
isort (func call)      16.760    17.380      13.390    16.740
isort (template func)  7.902     8.800       7.690     9.010
Paul’s heapsort        34.790    43.680      40.740    39.060
Paul’s quicksort       8.410     8.940       7.810     9.450
Paul’s mergesort       11.103    13.390      10.680    13.030

As for the algorithms themselves, we can see that introsort is the fastest and heapsort is the slowest. Mergesort is also very fast. Combsort11 is claimed to approach quicksort, but I do not see this when sorting large integer arrays. As for the implementations of quicksort/introsort, STL’s is the best, with my implementation following very closely. Paul’s implementation is also very efficient. Libc qsort is clearly slower, which cannot simply be attributed to its use of function calls: my implementation with function calls, although slower than the one without, outperforms libc qsort on both Linux machines. As for the implementations of mergesort, my version has performance similar to STL stable_sort. Note that stable_sort uses buffered recursive mergesort when a temporary array can be allocated. When memory is insufficient, it uses in-place mergesort, which is not evaluated here.

Availability and alternative benchmarks

My implementation is available here as a single C++ template header file. The program for the benchmark is also available. Plain-text versions can be acquired by dropping the .html from the two links.

Paul Hsieh’s benchmark is here, including the original source code. He also discussed how the algorithms perform when the initial array is not completely random (I am one of the “naive people” by his standard). Please note that in his benchmark he sorts an array of size 60,000 for 10,000 times, while my benchmark focuses on very large arrays. Notably, heapsort approaches introsort on small arrays, but is far slower on large arrays, presumably because of the bad cache performance of heapsort. Both quicksort and mergesort are cache-efficient.

In addition to Paul’s benchmark, you can also find alternative ones here and here. They seem to be more interested in theoretical issues than in efficient practical implementations.

If you search for “benchmark sorting algorithms” on Google, the first result is this page, implemented in D by Stewart Gordon. This benchmark aims to evaluate performance on small arrays. It also tests the speed when the array is sorted or reverse-sorted. However, the implementation is not optimized enough, at least for quicksort: using insertion sort when the array is nearly sorted is always preferred. You can also find this report from a Google search, but its quicksort implementation is too naive to be efficient.

Concluding remarks

Although introsort performs best in the table, we may want to use mergesort when we need a stable sort, or when comparison is very expensive. Mergesort is also faster than introsort when the array is nearly sorted. STL sort seems to take particular care of this case, which keeps it fast when the array is already sorted.

In common cases where comparison is cheap, introsort is the best choice. Of the various implementations, STL’s is the fastest. If you do not use STL, or you just want to use C, you can use or adapt my implementation, which is very close to STL sort in speed. Do not use libc qsort, especially on Linux; it is not well implemented.

Update

  1. This website gives several good implementations of sorting algorithms. I also believe the programmer behind it is very capable. Highly recommended.

Read Full Post »

You can find good subroutines in GSL for multidimensional nonlinear optimization, but the only method it provides for derivative-free optimization (DFO) is the Nelder-Mead simplex method, which, as Numerical Recipes claims, is “almost surely” slower than Powell’s direction set method “in all likely applications”. You can find some DFO solvers here, but few of them are implemented in C/C++.

I have reimplemented a Hooke-Jeeves solver, adapted Powell’s direction set method from Numerical Recipes in C, and ported the NEWUOA solver from Fortran to C++ with f2c. Of the three solvers, the Hooke-Jeeves method is the simplest: it searches the region around a given point and reduces the search radius in each iteration. It is purely heuristic and is hardly related to any “algorithm”. Powell’s direction set method minimizes the objective function along one direction at a time using Brent’s method. It is believed to outperform the Nelder-Mead method. NEWUOA is a much more advanced method; frankly, I do not know how it works. A benchmark shows that NEWUOA outperforms NMSMAX, an implementation of the Nelder-Mead method, as well as APPSPACK. All three solvers work well on the problem I want to solve.
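For the curious, the skeleton of a Hooke-Jeeves-style pattern search is tiny. This is only a sketch of the idea (probe each coordinate, keep improvements, halve the radius when stuck), not my actual solver:

#include <stdio.h>
#include <math.h>

double pattern_min(double (*f)(int, double*), int n, double *x, double h, double eps)
{
	double fx = f(n, x);
	while (h > eps) {
		int improved = 0;
		for (int i = 0; i < n; ++i) {
			double x0 = x[i];
			for (int s = -1; s <= 1; s += 2) { // probe both directions
				x[i] = x0 + s * h;
				double fy = f(n, x);
				if (fy < fx) { fx = fy; x0 = x[i]; improved = 1; }
			}
			x[i] = x0; // keep the best value found for this coordinate
		}
		if (!improved) h *= 0.5; // nothing worked: reduce the search radius
	}
	return fx;
}

static double rosenbrock(int n, double *x) // a classic test function
{
	(void)n;
	return 100.0 * pow(x[1] - x[0] * x[0], 2) + pow(1.0 - x[0], 2);
}

int main()
{
	double x[2] = {-1.2, 1.0};
	double fmin = pattern_min(rosenbrock, 2, x, 0.5, 1e-6);
	printf("min %g at (%g, %g)\n", fmin, x[0], x[1]);
	return 0;
}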

The three solvers are implemented as C++ template headers, one header file per solver. The Hooke-Jeeves method is here and NEWUOA is here. I cannot release the source code for Powell’s direction set method, as Numerical Recipes disallows this.

Read Full Post »
