Posts Tagged ‘C’


Here is a simple example showing how to use the khash.h library:

#include "khash.h"
KHASH_MAP_INIT_INT(32, char)
int main() {
	int ret, is_missing;
	khiter_t k;
	khash_t(32) *h = kh_init(32);
	k = kh_put(32, h, 5, &ret);
	if (!ret) kh_del(32, h, k);
	kh_value(h, k) = 10;
	k = kh_get(32, h, 10);
	is_missing = (k == kh_end(h));
	k = kh_get(32, h, 5);
	kh_del(32, h, k);
	for (k = kh_begin(h); k != kh_end(h); ++k)
		if (kh_exist(h, k)) kh_value(h, k) = 1;
	kh_destroy(32, h);
	return 0;
}

The second line says we want to use a hash map with int keys and char values. khash_t(32) is a type. kh_get() and kh_put() return an iterator, i.e. the position in the hash table. kh_del() erases the key-value pair in the bucket pointed to by the iterator. kh_begin() and kh_end() return the begin and end iterators, respectively, and kh_exist() tests whether the bucket at an iterator is filled with a key-value pair. The API is not as concise as C++ templates, but it is very straightforward and flexible. How can this be done?

Achieving generic programming in C

The core part of khash.h is:

#define KH_INIT(name, key_t, val_t, is_map, _hashf, _hasheq) \
  typedef struct { \
    int n_buckets, size, n_occupied, upper_bound; \
    unsigned *flags; \
    key_t *keys; \
    val_t *vals; \
  } kh_##name##_t; \
  static inline kh_##name##_t *init_##name() { \
    return (kh_##name##_t*)calloc(1, sizeof(kh_##name##_t)); \
  } \
  static inline int get_##name(kh_##name##_t *h, key_t k) \
  ... \
  static inline void destroy_##name(kh_##name##_t *h) { \
    if (h) { \
      free(h->keys); free(h->flags); free(h->vals); free(h); \
    } \
  }

#define _int_hf(key) (unsigned)(key)
#define _int_heq(a, b) (a == b)
#define khash_t(name) kh_##name##_t
#define kh_init(name) init_##name()
#define kh_get(name, h, k) get_##name(h, k)
#define kh_destroy(name, h) destroy_##name(h)
#define KHASH_MAP_INIT_INT(name, val_t) \
	KH_INIT(name, unsigned, val_t, 1, _int_hf, _int_heq)

In the macro ‘KH_INIT’, name is a unique symbol that distinguishes hash tables of different types; key_t is the type of the key; val_t is the type of the value; is_map is 0 or 1, indicating whether to allocate memory for vals; _hashf is a hash function/macro; and _hasheq is the comparison function/macro. The macro ‘KHASH_MAP_INIT_INT’ is a convenient interface for hashes with integer keys.

When ‘KHASH_MAP_INIT_INT(int, char)’ is used in a C source file, the following code will be inserted:

  typedef struct {
    int n_buckets, size, n_occupied, upper_bound;
    unsigned *flags;
    unsigned *keys;
    char *vals;
  } kh_int_t;
  static inline kh_int_t *init_int() {
    return (kh_int_t*)calloc(1, sizeof(kh_int_t));
  }
  static inline int get_int(kh_int_t *h, unsigned k)
  ...
  static inline void destroy_int(kh_int_t *h) {
    if (h) {
      free(h->keys); free(h->flags); free(h->vals); free(h);
    }
  }

And when we call ‘kh_get(int, h, 5)’, we are actually calling ‘get_int(h, 5)’, which was generated by the KHASH_MAP_INIT_INT macro. In this way, we can effectively achieve generic programming with simple interfaces. As inline functions and macros are used throughout, efficiency is not compromised at all. In my hash table benchmark, khash is as fast and lightweight as the C++ implementation.

Other technical concerns

  • Resolving collisions. I have discussed this in my previous post. I prefer a smaller memory footprint and therefore chose open addressing.
  • Grouping key-value pairs or not. In the current implementation, keys and values are kept in separate arrays. This strategy causes additional cache misses because retrieving a key and its value touches two arrays. Grouping each key-value pair in a struct is more cache efficient. However, the good side of separating keys and values is that it avoids wasting memory when the key and value types cannot be aligned well (e.g. the key is an integer while the value is a character). I would rather trade a bit of speed for smaller memory. In addition, it is not hard to use a struct as a key in the current framework.
  • Space-efficient rehashing. Traditional rehashing requires allocating one additional hash table and moving elements from the old table to the new one. For most hash implementations, this means we need 50% extra working space to enlarge a hash. This is not necessary. In khash.h, only a new flags array is allocated on rehashing. The keys and values arrays are enlarged with realloc, which does not claim more memory than the new hash needs. Keys and values are moved from their old positions to their new positions within the same memory space. This strategy also helps to clear all buckets marked as deleted without changing the size of a hash.
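The in-place relocation trick can be sketched in miniature. The snippet below is my own illustration, not khash's actual code (the function name permute_in_place and the precomputed dest[] table are made up for the example): it moves every element to its new slot using eviction cycles, with only one temporary element and a one-byte-per-slot "placed" bitmap, which plays the role of khash's new flags array. Real khash computes each destination from the hash function on the fly instead of reading a table.

```c
#include <stdlib.h>

/* Move a[i] to slot dest[i] for every i, in place.  Extra space is one
   temporary element plus a one-byte-per-slot bitmap; no second array of
   elements is ever allocated. */
void permute_in_place(int *a, const int *dest, int n)
{
	unsigned char *placed = (unsigned char*)calloc(n, 1);
	int i;
	for (i = 0; i < n; ++i) {
		int cur, carry;
		if (placed[i]) continue;   /* this slot was filled by an earlier cycle */
		cur = i; carry = a[i];
		do { /* follow one eviction cycle: drop the carried element into
		        its destination and pick up whatever was there */
			int d = dest[cur];
			int evicted = a[d];
			a[d] = carry;
			placed[d] = 1;
			carry = evicted;
			cur = d;
		} while (cur != i);
	}
	free(placed);
}
```

For example, with dest = {2, 0, 3, 1} the array {10, 20, 30, 40} becomes {20, 40, 10, 30}: each element lands at its assigned slot without a second array.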

Read Full Post »

Generic Programming in C

Templates in C++ are the single reason that I still keep using the language. Previously, I thought generic programming in C was nothing but ugly and painful. Now I have changed my mind a bit, in light of tree.h written by Niels Provos. Generic programming in C can be done without much pain and with only slightly less elegance than C++ implementations. How can this be done? Macros, of course. But the form in which the macros are used is where all the tricks come in.

The first way to achieve generic programming is to pass a type to macros. Jason Evans’ rb.h is an example. Each operation on an RB tree is a macro. Users have to provide the type of the data in the tree and a comparison function with each macro call. It is not hard to think of this approach, but we can do better.

In tree.h, Niels gives a better solution: token concatenation. The key macro is SPLAY_PROTOTYPE(name, type, field, cmp). It is a huge macro that defines several operations, in the form of “static inline” functions, on the splay tree. These functions are inserted into the C source code that uses the macro. Using SPLAY_PROTOTYPE() with different “name”s inserts different functions. For example, when “SPLAY_PROTOTYPE(int32, int, data, intcmp)” is invoked, the insertion function will be “int32_SPLAY_INSERT()”. Splay trees with different “name”s can coexist in one C source file because their operations have different names. At the end of tree.h, Niels further defines “#define RB_INSERT(name, x, y) name##_RB_INSERT(x, y)”. Then, in the C source code, we can call insertion with “RB_INSERT(int32, x, y)”. In comparison to a C++ template implementation, the only extra line you need is the SPLAY_PROTOTYPE() invocation. Calling the operations is just as easy.
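The trick is easy to reproduce in miniature. Below is my own sketch of the same name-concatenation pattern applied to a growable array (the names VEC_PROTOTYPE, vec_t, vec_push and vec_pop are made up for the illustration, not from tree.h): one PROTOTYPE macro generates the “static inline” functions, and thin dispatch macros make call sites read generically.

```c
#include <stdlib.h>

/* one macro generates a full set of static inline functions for a given
   element type; the name parameter keeps different instantiations apart */
#define VEC_PROTOTYPE(name, type) \
	typedef struct { size_t n, m; type *a; } vec_##name##_t; \
	static inline void vec_push_##name(vec_##name##_t *v, type x) { \
		if (v->n == v->m) { /* full: grow by doubling */ \
			v->m = v->m? v->m << 1 : 4; \
			v->a = (type*)realloc(v->a, v->m * sizeof(type)); \
		} \
		v->a[v->n++] = x; \
	} \
	static inline type vec_pop_##name(vec_##name##_t *v) { return v->a[--v->n]; }

/* thin dispatch macros so call sites read generically, as in tree.h/khash.h */
#define vec_t(name) vec_##name##_t
#define vec_push(name, v, x) vec_push_##name(v, x)
#define vec_pop(name, v) vec_pop_##name(v)

VEC_PROTOTYPE(i32, int)    /* instantiate for int... */
VEC_PROTOTYPE(dbl, double) /* ...and for double; both coexist in one file */
```

A call site then looks like `vec_t(i32) s = {0, 0, NULL}; vec_push(i32, &s, 7);`, which expands to `vec_push_i32(&s, 7)`, exactly the tree.h dispatch pattern.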

I will further explain this idea when I present my khash implementation in C.

Read Full Post »

As a Perl programmer, I greatly enjoy using hash tables, and I keep this habit in C/C++ programming. So what C/C++ hash libraries are available, and how do they compare to each other? In this post, I will give a brief review of hash libraries and present a small benchmark showing their practical performance.

Hash table libraries

In C++, the most widely used hash table implementation is hash_map/set in the SGI STL, which ships with the GCC compiler. Note that hash_map/set is SGI’s extension and is not part of the C++ standard. TR1 (Technical Report 1) tries to standardize hash tables; it provides unordered_map/set with an API similar to hash_map/set, and most of the TR1 routines have been available since gcc-4.0. Google sparsehash is another C++ hash table template library with an API similar to hash_map/set. It provides two implementations: one efficient in speed and the other in memory.

In contrast, there are few good C libraries around. I have tried SunriseDD, uthash, the glibc hash table, hashit, Christopher Clark’s hashtable, the glib hash table and ghthash. SunriseDD sounds like a great library that implements a lock-free hash table. However, I am not sure how to install or use it, although the code itself is well documented. Uthash is a single header file. It is quite complex to use and incompatible with C++. It also lacks basic APIs, such as counting the number of elements in the hash table. Glibc hash and hashit seem to implement only static hash tables. Glibc hash does not even have a deletion operation. Only glib hash, CC’s hashtable and ghthash implement most of the common operations. And they still have their weaknesses in comparison to C++ implementations (see below).

Design of the benchmark

The benchmark comprises two experiments. In the first experiment, a random integer array of 5 million elements is generated, containing about 1.25 million distinct keys. Each element is then tested for presence in the hash: if it is present, it is removed; otherwise, it is inserted. 625,792 distinct keys remain in the hash after this process. To test performance on string input, I convert the integers to strings with sprintf().
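For illustration, the “insert if absent, delete otherwise” protocol of the first experiment can be sketched as below. This is my reconstruction, not the benchmark driver itself: a direct-address byte array stands in for the hash table, and the function name, key range, operation count and seed are placeholders rather than the benchmark’s actual parameters.

```c
#include <stdlib.h>

/* Toy re-enactment of experiment 1's update rule.  A direct-address byte
   array stands in for the hash table, so only the protocol is shown, not
   hash-table performance.  Returns the number of keys left "in the table",
   or -1 if the bookkeeping cross-check fails. */
long run_mixed_updates(long n_ops, long key_space, unsigned seed)
{
	unsigned char *present = (unsigned char*)calloc(key_space, 1);
	long i, k, in_table = 0, still_set = 0;
	if (present == NULL) return -1;
	srand(seed);
	for (i = 0; i < n_ops; ++i) {
		k = rand() % key_space;
		if (present[k]) { present[k] = 0; --in_table; } /* present: delete */
		else            { present[k] = 1; ++in_table; } /* absent: insert */
	}
	for (k = 0; k < key_space; ++k) still_set += present[k]; /* cross-check */
	free(present);
	return still_set == in_table? in_table : -1;
}
```

In the real benchmark the same loop runs against each hash library's put/get/del operations, which is what makes it exercise insertion, query and deletion in one pass.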

The second experiment was designed by Craig Silverstein, the author of sparsehash, and I am using his source code. It tests the performance of insertion into a zero-sized hash, insertion into a preallocated hash, replacement, query, query on an empty hash, and removal.


The following table gives the results in the first experiment:

Library  Mac int CPU (sec)  Mac str CPU (sec)  Mac peak mem (MB)  Linux int CPU (sec)  Linux str CPU (sec)  Linux peak mem (MB)
glib 1.904 2.436 11.192 3.490 4.720 24.968
ghthash 2.593 2.869 29.0/39.0 3.260 3.460 61.232
CC’s hashtable 2.740 3.424 59.756 3.040 4.050 129.020
TR1 1.371 2.571 16.140 1.750 3.300 28.648
STL hash_set 1.631 2.698 14.592 2.070 3.430 25.764
google-sparse 2.957 6.098 4.800 2.560 6.930 5.42/8.54
google-dense 0.700 2.833 24.616 0.550 2.820 24.7/49.3
khash (C++) 1.089 2.372 6.772 1.100 2.900 6.88/13.1
khash (C) 0.987 2.294 6.780 1.140 2.940 6.91/13.1
STL set (RB) 5.898 12.978 19.868 7.840 18.620 29.388
kbtree (C) 3.080 13.413 3.268 4.260 17.620 4.86/9.59
NP’s splaytree 8.455 23.369 8.936 11.180 27.610 19.024


  • Please be aware that changing the size of the input data may change the ranking of speed and memory. The speed of a library may vary by up to 10% between two runs.
  • CPU time is measured in seconds. Memory denotes the peak memory, measured in MB.
  • For string hash, only the pointer to a string is inserted. Memory in the table does not count the space used by strings.
  • If two numbers are given for memory, the first is for integer keys and the second for string keys.
  • For all C++ libraries and khash.h, one operation is needed to achieve “insert if absent; delete otherwise”. Glib and ghthash require two operations, which does not favour these two libraries.
  • The speed may also be influenced by the efficiency of the hash functions. Khash and glib use the same hash function; TR1/SGI-STL/google-hash use another. Fortunately, in my experiment, the two string hash functions have quite similar performance, and so the benchmark reflects the performance of the overall hash libraries rather than just the hash functions.
  • For glib and ghthash, what is inserted is the pointer to the integer instead of the integer itself.
  • Ghthash supports dynamic hash tables. However, the results do not seem correct when this is switched on, so I am using a fixed-size hash table. This favours ghthash.
  • CC’s hashtable forcibly frees each key, which none of the other libraries does. This behaviour adds overhead in both speed and memory in my benchmark (but probably not in other applications). Its memory is measured for integer keys.
  • This simple benchmark does not test the strength and weakness of splay tree.

And here is the result of the second experiment:

Library grow pred/grow replace fetch fetchnull remove Memory
TR1 194.2 183.9 30.7 15.6 15.2 83.4 224.6
STL hash_map 149.0 110.5 35.6 11.5 14.0 87.2 204.2
STL map 289.9 289.9 141.3 134.3 7.0 288.6 236.8
google-sparse 417.2 237.6 89.5 84.0 12.1 100.4 85.4
google-dense 108.4 39.4 17.8 8.3 2.8 18.0 256.0
khash (C++) 111.2 99.2 26.1 11.5 3.0 17.4 198.0


  • CPU time is measured in nanoseconds per operation. Memory is measured by TCmalloc: it is the memory difference before and after the allocation of the hash table, rather than the peak memory.
  • In this experiment, integers are inserted in order and there are no collisions in the hash table.
  • All these libraries provide similar API.


  • Speed and memory. The larger the hash table, the fewer collisions occur and the faster the operations. For the same hash library, increasing memory always increases speed. When we compare two libraries, both speed and memory should be considered.
  • C vs. C++. All the C++ implementations have similar APIs and are very easy to use with any type of key. Both C libraries, ghthash and glib, can only keep pointers to the keys, which complicates the API and increases memory, especially on 64-bit systems where a pointer takes 8 bytes. In general, C++ libraries are preferred over C ones. Surprisingly, on 32-bit Mac OS X, glib outperforms TR1 and STL for string input. This might indicate that the glib implementation itself is very efficient; it is just the lack of functionality in C that hurts its performance.
  • Generic programming in C. Except for my khash.h, all the other C hash libraries use (void*) to achieve generic typing. Using void* is okay for strings, but causes overhead for integers. This is why all C libraries except khash.h are slower than C++ libraries on integer keys, but comparable on string keys.
  • Open addressing vs. chaining. Khash and the Google hashes implement open addressing, while the rest implement chaining. In an open-addressing hash, the size of each bucket equals the size of a key plus 0.25 bytes. Google sparsehash further compresses each unused bucket to 1 bit, achieving high memory efficiency. In a chaining hash, the memory overhead of each bucket is at least 4 bytes on 32-bit machines, or 8 bytes on 64-bit machines. However, chaining is less affected when the hash table is nearly full. In practice, open addressing and chaining occupy similar memory at similar speed. Khash takes less peak memory mainly due to its rehashing technique, which reduces memory usage. As far as speed is concerned, chaining may perform fewer key comparisons: we can see this from the fact that the speed of chaining hashes approaches that of open addressing on string keys but is much slower on integer keys.
  • Memory usage of search trees. The B-tree is the winner here. Each element in the B-tree only needs one additional pointer. When there are enough elements, a B-tree is at least half full; on average it should be around 75% full. So on 64-bit systems, a B-tree with N elements needs about N*8/0.75=10N bytes of additional memory. A splay tree needs N*8*2=16N extra bytes. The RB tree is the worst.
  • Other issues. a) Google hash becomes unbearably slow when I try to put a lot of strings in the hash table; none of the other libraries has this problem. b) Google hash performs more key comparisons than khash. This is apparent from the fact that google-dense is clearly faster on integer keys but only comparable to khash on string keys.

Concluding remarks

  • C++ hash libraries are much easier to use than C libraries. This is definitely where C++ is preferred over C.
  • The TR1 hash implementation is no faster than the STL implementation. Either may outperform the other under certain inputs or settings.
  • SGI hash_map is faster and takes less memory than STL map. Unless ordering is important, hash_map is a better container than map.
  • Google hash is a worthy choice when we understand why it is slow for many string keys.
  • My khash library, which is a single-file C++ template header, achieves a good balance between speed and memory. All my source code is available on the Programs page.


  1. C interface can be elegant, too, if we implement it cleverly. See this post.
  2. I realize that we just need one lookup to achieve “insert if absent; delete otherwise”. This further improves the speed for all C++ libraries.
  3. I have analyzed google dense hash table in this post which explains why it is faster than khash on integer keys but close to or slower than on string keys.
  4. This thread directed me to the gcc hashtable and the cocom hashtable. They are more or less independent of other source code, but it would still take time to separate them out, so I have not benchmarked them; I am just keeping a record here.
  5. Python dictionary is in fact a hash table. The dictnotes.txt in that directory gives some quite interesting discussion about how to implement hash efficiently.
  6. hashlib library. A bit hard to use and I cannot get it running correctly. Possibly I have not provided a proper second hash function for rehashing.
  7. Added results for STL set (based on a red-black tree) and John-Mark Gurney’s B-tree implementation (JG’s btree). Both libraries are considerably slower than hash tables; of course, search trees provide more functionality than hash tables, and every nice thing comes at a price. I have also tried Jason Evans’s and Niels Provos’ red-black tree implementations. On integer keys, JE’s takes 6.110 seconds on Mac-Intel using 18.884 MB memory, and NP’s takes 6.611 seconds using the same amount of memory. This performance is close to that of STL set. They appear to be slower mainly due to the additional malloc/free calls I have to make under their APIs. Unlike hash tables, which can be implemented in a variety of ways, red-black trees are usually implemented in one way (well, there can be more; see also Jason’s blog), and so I only show the performance of STL set as a representative.
  8. Replaced JG’s B-tree with a modified version. The new version is both faster and more lightweight.

Read Full Post »

Just now I got an email from a mailing list, saying that C++ helps to greatly reduce coding time in comparison to C. I have heard this argument a lot. But is it true?

C++ can possibly accelerate development in two ways: firstly, OOP (Object-Oriented Programming) helps to organize large projects, and secondly, the STL (Standard Template Library) saves time on reimplementing frequently used subroutines. However, I do not find that C++ OOP helps me greatly. To me, it is not right to classify a programming language outright as procedure-oriented or object-oriented; it is only right to say a development methodology is procedure-oriented or object-oriented. We can effectively mimic the fundamental OOP ideas in C, a so-called procedure-oriented language, by packaging related data in a struct and passing a pointer to the struct to subroutines. I know C++ programmers would argue that this is far from OOP, but it captures the essence of OOP, and in practice this simple and natural idea is sufficient to organize large projects. The large number of existing C projects, such as the Linux kernel, gcc and Emacs, proves this. With OOP ideas, we can use C to organize large projects without difficulty. C++ does not provide more power here; it only introduces more complicated concepts.

I do not use the STL most of the time. I have implemented most of the useful subroutines in C/C++ myself. I actually spend less time using my own library than I would using the STL, as I am very familiar with my own code. Of course, implementing an efficient and yet generic library by myself took a lot of time, but I learned a lot in this invaluable process. I can hardly imagine how a programmer who has not got a firm grasp of data structures, which can only be achieved by implementing them oneself, could ever write good programs. To this end, I agree that using the STL reduces coding time for elementary programmers, but this is achieved at the cost of weakening their ability to write better programs. And for an advanced programmer, using the STL may help but probably does not save much time.

Note that I am not saying C++ is a bad language as a whole. In fact, I use C++ template functions a lot and C++ template classes at times. In this post, I just want to emphasize the importance of focusing on the art of programming instead of on artificial concepts or on the degree of laziness a language can provide.

Read Full Post »

GNU sort is one of my favorite programs. It is fast and highly flexible. However, when I try to sort chromosome names, it becomes a pain. In bioinformatics, chromosomes are usually named chr1, chr2, …, chr10, chr11, …, chr20, …, chrX and chrY. It seems to me that there is no way to make GNU sort order these names as above. Finally, I decided to modify GNU sort. I separated the sort source code from textutils-1.22 because this version is less dependent on other packages.

The string comparison function is:

static int mixed_numcompare(const char *a, const char *b)
{
  char *pa, *pb;
  pa = (char*)a; pb = (char*)b;
  while (*pa && *pb) {
    if (isdigit(*pa) && isdigit(*pb)) {
      long ai, bi;
      ai = strtol(pa, &pa, 10);
      bi = strtol(pb, &pb, 10);
      if (ai != bi) return ai < bi? -1 : 1;
    } else {
      if (*pa != *pb) break;
      ++pa; ++pb;
    }
  }
  if (*pa == *pb)
    return (pa-a) < (pb-b)? -1 : (pa-a) > (pb-b)? 1 : 0;
  return *pa < *pb? -1 : 1;
}

It does numerical comparison for runs of digits and string comparison for other characters. With this comparator, chromosome names are sorted in the desired way. I added a new command line option -N (or -k1,1N) to trigger the mixed string/number comparison.
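Here is a standalone sketch of how such a comparator behaves outside GNU sort (this is my reconstruction wrapped for qsort(), not the patched sort code; the adapter name cmp_names is made up):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* the comparator from the post, reproduced so the demo is self-contained */
int mixed_numcompare(const char *a, const char *b)
{
	char *pa = (char*)a, *pb = (char*)b;
	while (*pa && *pb) {
		if (isdigit((unsigned char)*pa) && isdigit((unsigned char)*pb)) {
			long ai = strtol(pa, &pa, 10); /* compare digit runs as numbers */
			long bi = strtol(pb, &pb, 10);
			if (ai != bi) return ai < bi? -1 : 1;
		} else {
			if (*pa != *pb) break;         /* compare other characters as text */
			++pa; ++pb;
		}
	}
	if (*pa == *pb)
		return (pa-a) < (pb-b)? -1 : (pa-a) > (pb-b)? 1 : 0;
	return *pa < *pb? -1 : 1;
}

/* adapter so the comparator can drive qsort() over an array of strings */
int cmp_names(const void *x, const void *y)
{
	return mixed_numcompare(*(const char* const*)x, *(const char* const*)y);
}
```

Sorting {chr10, chr2, chrX, chr1, chr11} with qsort() and this adapter yields chr1, chr2, chr10, chr11, chrX, which is exactly the desired chromosome order.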

In addition, I also replaced the top-down recursive mergesort with a bottom-up iterative sort, and used a heap to accelerate merging. The improved sort is a little faster than the original version.

The improved sort can be downloaded here, distributed under GPL.

Read Full Post »

I was ignorant. An hour ago, I thought it was impossible to implement a garbage collector (GC) for C, but this is certainly wrong.

For an interpreted language like Perl, it is cheap to keep track of memory that is no longer referenced, and therefore it is not so hard to identify and free unused memory in most cases, except for circular references. Java disallows pointers, including internal pointers, and so objects out of scope can be easily identified and freed. C is very different. At first sight, it seems impossible to tell where pointer variables point. Then how can we identify unused memory? This page gives the answer: we can scan registers, stacks and static data regions to collect information on pointers. Knowing this information makes it possible to implement a GC for C. The most famous implementation is the Boehm-Demers-Weiser GC library. A third-party review shows that this GC may outperform manual memory management; it also thoroughly discusses the advantages and disadvantages of the library at the end. The Memory Management Reference is another website that provides insight into GC.
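The idea of conservative scanning can be demonstrated in toy form. The sketch below is my own illustration, not how BDW GC is implemented: it reads the raw words of the current stack frame and reports whether any of them equals the address of a heap block. It deliberately relies on implementation-defined behaviour (comparing addresses of distinct stack objects, assuming word-aligned locals), which is exactly the kind of non-portable knowledge a real conservative collector builds in; the names frame_sees_block and words_mention are made up.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* conservatively scan the word range [lo, hi] for a value equal to target */
static int words_mention(const uintptr_t *lo, const uintptr_t *hi, uintptr_t target)
{
	const uintptr_t *p;
	for (p = lo; p <= hi; ++p) {
		uintptr_t w;
		memcpy(&w, p, sizeof w); /* read raw words, as a collector would */
		if (w == target) return 1;
	}
	return 0;
}

/* does a naive word scan of this frame's locals discover a pointer to block? */
int frame_sees_block(void *block)
{
	void * volatile root = block;      /* a "root": live pointer on the stack */
	uintptr_t marker_a = 1, marker_b = 2;
	const uintptr_t *addrs[3], *lo, *hi;
	int i;
	addrs[0] = (const uintptr_t *)&marker_a;
	addrs[1] = (const uintptr_t *)&root;
	addrs[2] = (const uintptr_t *)&marker_b;
	lo = hi = addrs[0];
	for (i = 1; i < 3; ++i) {          /* span all three locals */
		if (addrs[i] < lo) lo = addrs[i];
		if (addrs[i] > hi) hi = addrs[i];
	}
	return words_mention(lo, hi, (uintptr_t)block);
}
```

A real collector does this over the whole stack, every register and the static data segment, and treats any matching word as a possible reference, which is why it can never be sure a block is garbage, only that it might still be reachable.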

Probably I will not use GC in C. Although GC can be faster, its behaviour is less predictable than manual memory management, and this makes me uneasy, as I am used to controlling memory myself. More importantly, the BDW GC does not seem to do bounds checking. When such an error occurs, it will be very difficult to identify the problem, because the GC effectively cripples valgrind, which would otherwise pinpoint the error.

Read Full Post »

A colleague of mine just told me that C++ iostream is typically an order of magnitude slower than printf. His example shows that printing a string like “%s\t%d\tabc\t%s\t%s\n” with C++ iostream is 3 times slower than printf in Perl! This observation agrees with my experience, although I have never done any benchmark. I abandoned iostream after trying it for the first time in one of my programs.

Update: In another test, C++ iostream is ~30% slower, which is not too bad. Anyway, be aware that C++ iostream can be very slow in some cases. This thread also provides helpful information.

Read Full Post »

The C pointer is a most powerful and nasty concept. Mastering C pointers is what separates intermediate C programmers from elementary ones. Want to know whether you have mastered C pointers? Have a look at this program. If the basic idea is clear to you, you are qualified to be an intermediate programmer. If you have difficulty, you should learn hard from other programmers. It is unwise to study this program as a beginner: the tricks in it are too complicated.

This program is adapted from an example in the C bible, “The C Programming Language” by Kernighan & Ritchie. I commented it and extended its functionality a bit. This allocator is usually not as efficient as the malloc family that comes with the system, but it is good enough for a lot of practical applications. Also, this allocator is simple and clear: you can largely predict its behaviour. In comparison, it is not always easy to understand what malloc is doing.
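For reference, the heart of the K&R allocator looks roughly like this. This is my condensed sketch of the Section 8.7 storage allocator, not the extended version discussed above: it uses a fixed static arena in place of sbrk(), and the names kr_init, kr_malloc and kr_free are made up. Free blocks form a circular, address-ordered list; malloc is first-fit and carves from the tail of a block, while free coalesces with adjacent neighbours.

```c
#include <stddef.h>

typedef union header {           /* block header */
	struct {
		union header *next;  /* next block on the free list */
		size_t nunits;       /* size of this block, in header units */
	} s;
	long double align;           /* force worst-case alignment */
} Header;

#define NUNITS 1024
static Header arena[NUNITS];     /* fixed arena instead of sbrk() */
static Header *freep = NULL;     /* entry point into the free list */

void kr_init(void)
{
	arena[0].s.nunits = 0;           /* arena[0] is a zero-size sentinel */
	arena[0].s.next = &arena[1];
	arena[1].s.nunits = NUNITS - 1;  /* the rest is one big free block */
	arena[1].s.next = &arena[0];
	freep = &arena[0];
}

void *kr_malloc(size_t nbytes)
{
	size_t nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;
	Header *prevp = freep, *p;
	for (p = prevp->s.next; ; prevp = p, p = p->s.next) {
		if (p->s.nunits >= nunits) {         /* big enough */
			if (p->s.nunits == nunits)   /* exact fit: unlink */
				prevp->s.next = p->s.next;
			else {                       /* carve from the tail */
				p->s.nunits -= nunits;
				p += p->s.nunits;
				p->s.nunits = nunits;
			}
			freep = prevp;
			return (void *)(p + 1);
		}
		if (p == freep) return NULL;         /* wrapped: out of space */
	}
}

void kr_free(void *ap)
{
	Header *bp = (Header *)ap - 1, *p;
	/* walk the address-ordered list to find the insertion point */
	for (p = freep; !(bp > p && bp < p->s.next); p = p->s.next)
		if (p >= p->s.next && (bp > p || bp < p->s.next))
			break;  /* freed block at start or end of the arena */
	if (bp + bp->s.nunits == p->s.next) {    /* join upper neighbour */
		bp->s.nunits += p->s.next->s.nunits;
		bp->s.next = p->s.next->s.next;
	} else bp->s.next = p->s.next;
	if (p + p->s.nunits == bp) {             /* join lower neighbour */
		p->s.nunits += bp->s.nunits;
		p->s.next = bp->s.next;
	} else p->s.next = bp;
	freep = p;
}
```

The predictability mentioned above comes from this structure: every allocation and free is a short walk of one explicit list, with no hidden size classes or thread caches.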

I came to know this allocator from Phil Green’s Phrap assembler. I then found the example in the book and reimplemented it in my own way.

Read Full Post »
