Posts Tagged ‘C’

Weekend project: K8 revived

Over a weekend two years ago, I wrote a Javascript shell, K8, based on Google’s V8 Javascript engine. It aimed to provide the basic file I/O that was surprisingly lacking from nearly all Javascript shells at that time. I have spent little time on the project since then, and K8 is no longer compatible with the latest V8.

Two years later, the situation with Javascript shells has not changed much. Most of them, including Dart, still lack usable file I/O for general-purpose text processing, one of the most fundamental features in other programming languages, from low-level C to Java/D to high-level Perl/Python. Web developers seem to follow a programming paradigm distinct from that of typical Unix programmers and programmers in my field.

This weekend, I revived K8, partly as an exercise and partly as my response to the lack of appropriate file I/O APIs in Javascript. K8 is written in a single 600-line C++ file. It is much smaller than other JS shells, but it provides the features I need most that are missing from Javascript and other JS shells. You can find the documentation at the K8 github. I will only show an example:

var x = new Bytes(), y = new Bytes();
x.set('foo'); x.set([0x20,0x20]); x.set('bar'); x.set('F', 0); x[3]=0x2c;
print(x.toString())   // output: 'Foo, bar'
y.set('BAR'); x.set(y, 5)
print(x)              // output: 'Foo, BAR'
x.destroy(); y.destroy()

if (arguments.length) { // read and print file
  var x = new Bytes(), s = new iStream(new File(arguments[0]));
  while (s.readline(x) >= 0) print(x)
  s.close(); x.destroy()
}

Read Full Post »

A Generic Buffered Stream Wrapper

In C programming, the main difference between the low-level I/O functions (open/close/read/write) and the stream-level I/O functions (fopen/fclose/fread/fwrite) is that the stream-level functions are buffered. Presumably, each read() call incurs a disk operation. Although the kernel may cache the data, we cannot rely too much on it. Disk operations are expensive, and so low-level I/O does not provide an fgetc() equivalent.

Stream-level I/O functions keep a buffer. On reading, they load a block of data from disk into memory. If at an fgetc() call the data have already been loaded into memory, no disk operation is incurred, which greatly improves efficiency.

Stream-level I/O functions are part of the standard C library. Why do we need a new wrapper? Three reasons. First, when you work with an alternative I/O library (such as zlib or libbzip2) that does not come with buffered I/O routines, you probably need a buffered wrapper to make your code efficient. Second, using a generic wrapper makes your code more flexible when you want to change the type of input stream. For example, you may want to write a parser that works on a normal stream, on a zlib-compressed stream and on a C string; a unified stream wrapper will simplify coding. Third, my feeling is that most stream-level I/O functions in stdio.h are not convenient, given that they cannot enlarge a string automatically. In many cases, I need to read one line but do not know in advance how long the line may be. Handling this is not hard, but doing it again and again is tedious.

In the end, I came up with my own buffered wrapper for input streams. It is generic in that it works on any type of I/O stream with a read() call (or equivalent), or even on a C string. I show an example here without much explanation; I may expand this post in future. Source code can be found on my programs page.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "kstream.h"
// arguments: type of the stream handler,
//   function to read a block, size of the buffer
KSTREAM_INIT(int, read, 10)

int main()
{
	int fd;
	kstream_t *ks;
	kstring_t str;
	memset(&str, 0, sizeof(kstring_t));
	fd = open("kstream.h", O_RDONLY);
	ks = ks_init(fd);
	while (ks_getuntil(ks, '\n', &str, 0) >= 0)
		printf("%s\n", str.s);
	free(str.s); close(fd);
	return 0;
}

Read Full Post »

This is a follow-up to my previous post. Here I replace the table with several charts, which I hope are more friendly to readers. You can find the links to these libraries in that table. Their source code, including my testing code, is available here. You may also want to see my previous posts from the last few days for my interpretation of the results.

On C string (char*) keys, I failed to get correct results with JE_rb_old and JE_rb_new on Mac, and so they are not shown in the charts. I would really appreciate it if someone could give me the correct implementation using these libraries. In addition, tr1_unordered_map uses a lot of memory according to my program. The memory measurements for string keys are faked.

For convenience, here are brief descriptions of these libraries (in no particular order):

  • google_dense and google_sparse: Google’s sparsehash library. google_dense is fast but memory-hungry, while google_sparse is the opposite.
  • sgi_hash_map and sgi_map: SGI’s STL that comes with g++-4. The backend of sgi_map is a three-pointer red-black tree.
  • tr1::unordered_map: GCC’s TR1 library that comes with g++-4. It implements a hash table.
  • rdestl::hash_map: from RDESTL, another implementation of STL.
  • uthash: a hash library in C
  • JG_btree: John-Mark Gurney’s btree library.
  • JE_rb_new, JE_rb_old, JE_trp_hash and JE_trp_prng: Jason Evans’ binary search tree libraries. JE_rb_new implements a left-leaning red-black tree; JE_rb_old a three-pointer red-black tree; both JE_trp_hash and JE_trp_prng implement treaps but with different strategies on randomness.
  • libavl_rb, libavl_prb, libavl_avl and libavl_bst: from GNU libavl. They implement a two-pointer red-black tree, a three-pointer red-black tree, an AVL tree and an unbalanced binary search tree, respectively.
  • NP_rbtree and NP_splaytree: Niels Provos’ tree library for FreeBSD. A three-pointer red-black tree and a splay tree.
  • TN_rbtree: Thomas Niemann’s red-black tree. I ported it to C++.
  • sglib_rbtree: from SGLIB. It implements a two-pointer recursive red-black tree (all the other binary search trees are implemented without recursion).
  • libavl_avl_cpp, libavl_rb_cpp and libavl_rb_cpp2: incomplete C++ version of libavl (no iterator), ported by me. Libavl_rb_cpp2 further uses the same technique in JE_rb_new to save the color bit. Source codes available in the package.
  • khash and kbtree: my own hash table and B-tree implementations. kbtree is based on JG_btree.

Read Full Post »

I was wondering whether retrieving an element from a struct incurs additional overhead, and so I did the following experiment. Here the same array is sorted in two ways: with or without retrieving data from a struct. Both ways yield identical results. The question is whether the compiler knows the two ways are equivalent and can achieve the same efficiency.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "ksort.h"

typedef struct {
	int a;
} myint_t;

#define myint_lt(_a, _b) ((_a).a < (_b).a)

KSORT_INIT_GENERIC(int)
KSORT_INIT(my, myint_t, myint_lt)

int main()
{
	int i, N = 10000000;
	myint_t *a;
	clock_t t;
	a = (myint_t*)malloc(sizeof(myint_t) * N);
	srand48(11);
	for (i = 0; i != N; ++i) a[i].a = lrand48();
	t = clock();
	ks_introsort(int, N, (int*)a);
	printf("%.3lf\n", (double)(clock() - t) / CLOCKS_PER_SEC);
	srand48(11);
	for (i = 0; i != N; ++i) a[i].a = lrand48();
	t = clock();
	ks_introsort(my, N, a);
	printf("%.3lf\n", (double)(clock() - t) / CLOCKS_PER_SEC);
	free(a);
	return 0;
}

Here is the speed with different compilers on different CPUs (the first value is without data retrieval from a struct, the second with):

  • Mac-Intel, gcc-4.0, -O2: 1.422 sec vs. 1.802 sec
  • Mac-Intel, gcc-4.2, -O2: 1.438 vs. 1.567
  • Mac-Intel, gcc-4.0, -O2 -fomit-frame-pointer: 1.425 vs. 1.675
  • Mac-Intel, gcc-4.2, -O2 -fomit-frame-pointer: 1.438 vs. 1.448
  • Linux-Intel, gcc-4.1, -O2: 1.600 vs. 1.520
  • Linux-Intel, gcc-4.1, -O2 -fomit-frame-pointer: 1.620 vs. 1.530
  • Linux-Intel, icc, -O2 -fomit-frame-pointer: 1.600 vs. 1.580

The conclusion is that retrieving data from a struct may have marginal overhead in comparison to direct data access. However, a good compiler can avoid this and produce nearly optimal machine code. Using “-fomit-frame-pointer” helps on some machines but not on others. In addition, it is a bit surprising to me that gcc on Linux generates faster code for data retrieval from a struct. Swapping the order of the two ways does not change the conclusion.

Read Full Post »

Over the weekend, I have done a more comprehensive benchmark of various search tree libraries. Two AVL tree, seven red-black tree, one splay tree and two treap implementations are involved, together with seven hash table libraries. As I need to present a big table, I have to write it in a free-style HTML page. You can find the complete benchmark here and all the source code here. I only copy the “concluding remarks” from the benchmark page as follows:

  • A hash table is preferred over search trees if we do not require order.
  • In applications similar to my example, a B-tree is better than most binary search trees in terms of both speed and memory.
  • The AVL tree and red-black tree are the best general-purpose BSTs. They are very close in efficiency.
  • For pure C libraries, using macros is usually more efficient than using void* to achieve generic programming.

You can find the results and much more discussion on that page. If you think the source code or the design of the benchmark can be improved, please leave comments here or send me an E-mail. In addition, I failed to use several libraries, and so you can see some blanks in the table. I would also appreciate it if someone could show me how to use those libraries correctly.

Read Full Post »

B-tree vs. Binary Search Tree

When talking about in-memory search trees, we usually think of various binary search trees: red-black tree, AVL tree, treap, splay tree and so on. We do not often think of the B-tree, as it is commonly introduced as an on-disk data structure rather than an in-memory one. Is the B-tree also a good data structure for an in-memory ordered dictionary? I used to search for performance comparisons between B-trees and binary search trees, but ended up with nothing useful. It seems I am the only one interested in such a comparison, and so I have to do it myself.

I found John-Mark Gurney’s B-tree via a google search. It is well coded and full of clever ideas. The original version has a small memory footprint, but it is not as fast as STL’s red-black tree. I studied the source code and thought I should be able to optimize it further. In the end, I got my kbtree.h macro library. As you can see in my hash table benchmark, the modified version beats the STL set while using even less memory than the original version. I think I am now in a position to say: at least for some applications, the B-tree is a better ordered data structure than most binary search trees.

The most attractive feature of the B-tree is its small memory usage. A binary tree needs at least two pointers for each record, which amounts to 16N on a modern 64-bit system. A B-tree only needs one pointer per record. Although each B-tree node may not be full, a sufficiently large B-tree is at least 50% full by definition and around 75% full on average. On a 64-bit system, the extra memory is only 8N/0.75 + KN(1/0.75 - 1), or approximately (10+0.3K)N, where K is the size of a key. In fact we can do even better, as we do not need to allocate the null pointers in leaves. The practical memory overhead can be reduced to below (5+0.3K)N (and as the majority of nodes in a B-tree are leaves, the factor 5 should be even smaller in practice), far better than a binary search tree. On speed, no binary search tree with just two additional pointers (splay tree and hash treap) achieves the best performance. We usually need additional information at each node (AVL tree and standard red-black tree) or a random number (treap) to get good performance. The B-tree is different: it is even faster than the standard red-black tree while still using less than (5+0.3K)N extra memory! People should definitely pay more attention to the B-tree.

Update: The modified B-tree is available here (HTML) as a single C header file. An example is here. Currently the APIs are not very friendly, but they are ready to use in case you want to give it a try. Note that you must make sure the key is absent from the tree before calling kb_put() and that the key is present in the tree before calling kb_del().

Someone has corrected me. STL is a specification, not an implementation. By STL in my blog, I always mean SGI STL, the default STL that comes with GCC.

Over the weekend I have done a more complete benchmark of various libraries on search trees and hash tables. Please read this post if you are interested.

I realize that a lot of red-black tree implementations do not need a parent pointer, although SGI STL’s does. My comment below is somewhat wrong.

Read Full Post »

I do not see much need for a vector container in C, as a vector is simply an array and array operations are all very simple. Nonetheless, it might still be better to implement one for the sake of completeness. Here is the code. The library is almost as fast as the fastest code you can write in C.

#ifndef AC_KVEC_H
#define AC_KVEC_H

#include <stdint.h>
#include <stdlib.h>

#define kv_roundup32(x) (--(x), (x)|=(x)>>1, (x)|=(x)>>2, (x)|=(x)>>4, (x)|=(x)>>8, (x)|=(x)>>16, ++(x))

#define kvec_t(type) struct { uint32_t n, m; type *a; }
#define kv_init(v) ((v).n = (v).m = 0, (v).a = 0)
#define kv_destroy(v) free((v).a)
#define kv_A(v, i) ((v).a[(i)])
#define kv_pop(v) ((v).a[--(v).n])
#define kv_size(v) ((v).n)
#define kv_max(v) ((v).m)

#define kv_resize(type, v, s) ((v).m = (s), (v).a = (type*)realloc((v).a, sizeof(type) * (v).m))

#define kv_push(type, v, x) do { \
		if ((v).n == (v).m) { \
			(v).m = (v).m? (v).m<<1 : 2; \
			(v).a = (type*)realloc((v).a, sizeof(type) * (v).m); \
		} \
		(v).a[(v).n++] = (x); \
	} while (0)

#define kv_pushp(type, v) ((((v).n == (v).m)? \
		((v).m = ((v).m? (v).m<<1 : 2), \
		 (v).a = (type*)realloc((v).a, sizeof(type) * (v).m), 0) \
		: 0), (v).a + ((v).n++))

#define kv_a(type, v, i) ((((v).m <= (i)? \
		((v).m = (v).n = (i) + 1, kv_roundup32((v).m), \
		 (v).a = (type*)realloc((v).a, sizeof(type) * (v).m), 0) \
		: (v).n <= (i)? (v).n = (i) + 1 \
		: 0), (v).a[(i)])

#endif

Read Full Post »

Older Posts »