There seems to be a memory leak involving avro_schema_record() in the datafile reader. I'm currently investigating this myself and can hopefully submit a patch soon; however, I'm wondering whether there is anything I'm forgetting to do when using the Avro C API.
Our program opens and closes many log files, and Valgrind reports the following leak:
==20151== 32,518 (48 direct, 32,470 indirect) bytes in 1 blocks are definitely lost in loss record 62 of 65
==20151== at 0x4C2BFCB: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==20151== by 0x4C2C27F: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==20151== by 0x4E3D074: avro_default_allocator (allocation.c:36)
==20151== by 0x4E60F8C: avro_schema_record (schema.c:558)
==20151== by 0x4E61A5E: avro_schema_from_json_t (schema.c:856)
==20151== by 0x4E622E9: avro_schema_from_json_root (schema.c:1083)
==20151== by 0x4E624C9: avro_schema_from_json_length (schema.c:1127)
==20151== by 0x4E3FE92: file_read_header (datafile.c:314)
==20151== by 0x4E40796: avro_file_reader_fp (datafile.c:491)
==20151== by 0x4082D3: LogReader::LogReader(_IO_FILE*, char const*) (LogReader.cpp:32)
==20151== by 0x4066A3: main (main.cpp:206)
A similar Valgrind report can be reproduced by running avrocat on a single Avro file.