splitget is a parallel downloader written in C, built on libcurl and pthreads.
- parallel chunks: downloads a file over several connections at once. A thread pool of workers pulls chunks from a shared queue.
- granular resume: if a download dies, the remaining byte ranges are saved in a `.metafile`; use `-r` to continue where it left off.
- adaptive chunking: chunk size scales with measured speed, so a fast connection gets bigger chunks (sketched after this list).
- direct i/o: chunks are written straight into the final file with `pwrite()`, so there are no `.part` files and no merge step.
- range detection: checks whether the server supports HTTP range requests; if not, it falls back to a single thread (probe sketch after this list).
- c: small binary, low memory footprint.
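The adaptive policy isn't spelled out above; a minimal version would scale the next chunk to the last chunk's measured throughput. This is a sketch, and the two-second target and clamp bounds are invented for illustration:

```c
#include <stdint.h>

/* Illustrative bounds, not splitget's actual constants. */
#define MIN_CHUNK ((uint64_t)256 * 1024)          /* 256 KiB */
#define MAX_CHUNK ((uint64_t)16 * 1024 * 1024)    /* 16 MiB  */

/* Size the next chunk so it takes roughly two seconds at the speed
 * we just measured, clamped to sane bounds. Fast link, big chunks. */
static uint64_t next_chunk_size(uint64_t bytes_per_sec)
{
    uint64_t target = bytes_per_sec * 2;
    if (target < MIN_CHUNK) return MIN_CHUNK;
    if (target > MAX_CHUNK) return MAX_CHUNK;
    return target;
}
```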
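A common way to probe for range support, and plausibly what the check looks like here (a sketch, not splitget's actual code), is to request just the first byte and look for a `206 Partial Content` reply:

```c
#include <curl/curl.h>

/* Swallow the single byte the probe fetches. */
static size_t discard(char *buf, size_t sz, size_t n, void *userp)
{
    (void)buf; (void)userp;
    return sz * n;
}

/* Ask for the first byte only; a 206 reply means the server honors
 * Range requests. Returns 1 if ranges work, 0 otherwise. Assumes
 * curl_global_init() already ran. Sketch only. */
static int server_supports_ranges(const char *url)
{
    CURL *h = curl_easy_init();
    long code = 0;

    if (!h)
        return 0;
    curl_easy_setopt(h, CURLOPT_URL, url);
    curl_easy_setopt(h, CURLOPT_RANGE, "0-0");            /* first byte */
    curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, discard);
    curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
    if (curl_easy_perform(h) == CURLE_OK)
        curl_easy_getinfo(h, CURLINFO_RESPONSE_CODE, &code);
    curl_easy_cleanup(h);
    return code == 206;                                   /* Partial Content */
}
```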
Dependencies: libcurl and pthreads.
Build with `make`; it produces the `splitget` binary.
Usage: `./splitget [opts] <url>`

- `-o <file>`: output file. Defaults to the filename from the URL.
- `-n <num>`: number of threads. Default: 4.
- `-r`: resume an interrupted download. Requires the `.metafile`.
- `-q`: quiet mode, no progress bars.
- `-h`: show help.
Example: `./splitget -r -n 8 https://example.com/file.iso -o test.iso`

Internals:
- a work-stealing queue hands out chunks to idle workers (queue sketch below).
- `pwrite()` writes each chunk at its absolute file offset (callback sketch below).
- a linked list tracks the remaining holes; the `.metafile` persists it for `-r` (metafile sketch below).
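The shared queue is the heart of the worker model. A true work-stealing setup keeps per-thread queues; the sketch below collapses that to one mutex-guarded list, which is enough to show the "idle worker grabs the next chunk" behavior. All names are hypothetical:

```c
#include <pthread.h>
#include <stdint.h>

/* One byte range still to download. Hypothetical layout. */
struct chunk {
    uint64_t off, len;
    struct chunk *next;
};

/* Single mutex-guarded list standing in for the real queue. */
struct chunk_queue {
    struct chunk *head;
    pthread_mutex_t lock;
};

/* Idle worker grabs the next chunk; NULL means no work left. */
static struct chunk *queue_pop(struct chunk_queue *q)
{
    pthread_mutex_lock(&q->lock);
    struct chunk *c = q->head;
    if (c)
        q->head = c->next;
    pthread_mutex_unlock(&q->lock);
    return c;
}

/* Adaptive splitting can push freshly cut chunks back in. */
static void queue_push(struct chunk_queue *q, struct chunk *c)
{
    pthread_mutex_lock(&q->lock);
    c->next = q->head;
    q->head = c;
    pthread_mutex_unlock(&q->lock);
}
```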
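Writing each chunk directly at its final offset is what removes the merge step. Here is a sketch of a libcurl write callback that does this; the `xfer` struct is hypothetical:

```c
#define _XOPEN_SOURCE 700   /* for pwrite() */
#include <stdint.h>
#include <unistd.h>

/* Per-chunk transfer state. Hypothetical struct. */
struct xfer {
    int fd;          /* output file, opened once, shared by all workers */
    uint64_t off;    /* absolute position where the next byte belongs */
};

/* libcurl write callback: land bytes at their final offset. pwrite()
 * takes an explicit offset, so concurrent workers can share one fd
 * without seeking. Returning less than sz*n tells libcurl to abort. */
static size_t write_at_offset(char *buf, size_t sz, size_t n, void *userp)
{
    struct xfer *x = userp;
    size_t total = sz * n, done = 0;

    while (done < total) {
        ssize_t w = pwrite(x->fd, buf + done, total - done,
                           (off_t)(x->off + done));
        if (w < 0)
            return 0;            /* abort this transfer */
        done += (size_t)w;
    }
    x->off += total;
    return total;
}
```

A worker would hook it up with `curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, write_at_offset)` and `curl_easy_setopt(h, CURLOPT_WRITEDATA, &x)`.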
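The hole list doubles as the resume state: the `.metafile` is just the remaining ranges flushed to disk. The one-pair-per-line format below is invented for illustration; the real layout is not documented here:

```c
#include <stdint.h>
#include <stdio.h>

/* An unwritten byte range ("hole") in the output file. */
struct hole {
    uint64_t off, len;
    struct hole *next;
};

/* Flush the remaining holes so -r can reload them. One "offset length"
 * pair per line; illustrative format only. */
static int save_metafile(const char *path, const struct hole *h)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    for (; h; h = h->next)
        fprintf(f, "%llu %llu\n",
                (unsigned long long)h->off,
                (unsigned long long)h->len);
    return fclose(f);    /* 0 on success, EOF on error */
}
```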
MIT. Copyright 2026 Mehul. See the LICENSE file.