To make an LMM pool ready for use, a client generally proceeds in three stages: first, initialize the pool structure with lmm_init; second, register one or more memory regions with lmm_add_region; third, donate actual blocks of free memory to the pool with lmm_add_free.
The following example initialization sequence sets up an LMM pool for use in a Unix-like environment, using an initial 1MB of memory obtained from sbrk to service allocations. It registers only one region, covering all possible memory addresses; this allows additional free memory areas to be added to the pool later, regardless of where they happen to be located.
#include <oskit/lmm.h>
#include <unistd.h>	/* for sbrk() */

lmm_t lmm;
lmm_region_t region;

int setup_lmm()
{
	unsigned mem_size = 1024*1024;
	char *mem = sbrk(mem_size);

	if (mem == (char*)-1)
		return -1;

	/* Initialize the (initially empty) memory pool. */
	lmm_init(&lmm);

	/* Register a single region covering all possible addresses. */
	lmm_add_region(&lmm, &region, (void*)0, (oskit_size_t)-1, 0, 0);

	/* Donate the 1MB block obtained from sbrk() to the pool. */
	lmm_add_free(&lmm, mem, mem_size);

	return 0;
}
After the LMM pool is set up properly, memory blocks can be allocated from it using any of the lmm_alloc functions described in the reference section below, and returned to the memory pool using the lmm_free function.
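As a brief illustration, a simple allocate/free cycle against the pool set up above might look like the sketch below. The helper name use_lmm and the 256-byte block size are arbitrary choices for this example; the lmm_alloc(lmm, size, flags) and lmm_free(lmm, block, size) signatures are the ones described in the reference section below.

#include <oskit/lmm.h>

extern lmm_t lmm;	/* the pool initialized by setup_lmm() above */

int use_lmm(void)
{
	/* Request a 256-byte block; a flags value of 0 requests
	   memory with no special properties. */
	void *buf = lmm_alloc(&lmm, 256, 0);
	if (buf == 0)
		return -1;	/* the pool could not satisfy the request */

	/* ... use the block ... */

	/* Return the block to the pool.  Note that, unlike free(),
	   lmm_free takes the size of the block being returned. */
	lmm_free(&lmm, buf, 256);
	return 0;
}

The block size here is arbitrary; what matters is that the caller remembers the size of each allocated block and passes it back to lmm_free when the block is returned.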