| Crates.io | piq |
| lib.rs | piq |
| version | 0.1.0 |
| created_at | 2026-01-09 04:38:06.111143+00 |
| updated_at | 2026-01-09 04:38:06.111143+00 |
| description | A file-based queue CLI for shell scripts with state tracking and concurrency safety |
| homepage | |
| repository | https://github.com/c22/piq |
| max_upload_size | |
| id | 2031581 |
| size | 44,158 |
A file-based queue for shell scripts. State tracking and concurrency safety without the complexity.
Shell scripts often process lists of items:
for item in $(cat urls.txt); do
process "$item"
done
This works until you need:

- state tracking: knowing which items are pending, in progress, or done, even across restarts
- concurrency safety: running several workers without two of them grabbing the same item

piq gives you both while staying dead simple.
# Poke items onto a queue
piq poke myqueue "https://example.com/page1"
piq poke myqueue "https://example.com/page2"
# Or pipe them in
cat urls.txt | xargs -I {} piq poke myqueue "{}"
# Worker script
while item=$(piq pick myqueue); do
if process_url "$item"; then
piq done myqueue "$item"
fi
done
Multiple workers can run the same script concurrently - each pick atomically claims a unique item.
A queue is just a folder with text files:
myqueue/
pending.txt # items waiting to be processed
taken.txt # items currently being worked on
done.txt # finished items
Each file contains one item per line. Items flow: pending → taken → done.
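Because the states are plain line-per-item files, the pending → taken move is easy to picture. A rough sketch of what a pick amounts to on disk, using only standard shell tools (no locking here, so this is illustration only, not how you should claim items concurrently):

```shell
set -eu
q=$(mktemp -d)   # stand-in queue folder for the sketch
printf '%s\n' "https://example.com/a" "https://example.com/b" > "$q/pending.txt"
: > "$q/taken.txt"
: > "$q/done.txt"

# Claim the first pending item: append it to taken.txt,
# then drop its line from pending.txt.
item=$(head -n 1 "$q/pending.txt")
echo "$item" >> "$q/taken.txt"
tail -n +2 "$q/pending.txt" > "$q/p.tmp"
mv "$q/p.tmp" "$q/pending.txt"
```

After this, `taken.txt` holds the claimed URL and `pending.txt` holds the rest, which is exactly the on-disk effect of `piq pick`.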
You can inspect, edit, or add to these files directly:
cat myqueue/pending.txt # see what's queued
wc -l myqueue/taken.txt # count in-progress
echo "new item" >> myqueue/pending.txt # add directly
piq poke <queue> <item> # poke an item onto the queue
piq peek <queue> [state] # peek at first item without removing
piq pick <queue> [item] # pick an item to work on (pending -> taken)
piq done <queue> <item> # mark item as done (taken -> done)
piq list <queue> [state] # show items (default: all states)
For custom workflows, use the generic transition command:
# Move a specific item between any two states
piq transition myqueue --from pending.txt --to failed.txt --item "https://..."
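On disk, a transition is just moving one named line from one state file to another. A plain-shell sketch of the equivalent move (the `failed.txt` state and the item value are hypothetical, and there is no locking here):

```shell
set -eu
q=$(mktemp -d)   # stand-in queue folder for the sketch
printf 'x\ny\nz\n' > "$q/pending.txt"
: > "$q/failed.txt"

item="y"
# Append the item to the destination file, then filter its exact
# line out of the source file (-F fixed string, -x whole line).
echo "$item" >> "$q/failed.txt"
grep -vFx "$item" "$q/pending.txt" > "$q/p.tmp" || true
mv "$q/p.tmp" "$q/pending.txt"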
Short answer: For production? Absolutely not! For local shell hacking? It beats for i in $(cat file.txt), hand-rolled lock files, and trying to remember which items you already processed.
Longer answer: piq uses flock for concurrency control. When you pick an item, the queue is locked, the item is moved atomically between files, then unlocked. Two workers racing will never grab the same item.
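You can reproduce the same lock-move-unlock pattern in plain shell with flock(1), which is a reasonable way to picture what piq does internally (a sketch under that assumption, not piq's actual code; requires util-linux `flock`):

```shell
set -eu
q=$(mktemp -d)   # stand-in queue folder for the sketch
printf 'a\nb\nc\n' > "$q/pending.txt"
: > "$q/taken.txt"

# Claim one item under an exclusive lock: append to taken.txt first,
# then shrink pending.txt, so a crash between the two writes leaves a
# duplicate rather than a lost item.
claim() {
  (
    flock -x 9                       # exclusive lock on the queue
    item=$(head -n 1 "$q/pending.txt")
    [ -n "$item" ] || exit 1         # queue empty
    echo "$item" >> "$q/taken.txt"
    tail -n +2 "$q/pending.txt" > "$q/p.tmp"
    mv "$q/p.tmp" "$q/pending.txt"
    echo "$item"
  ) 9> "$q/.lock"
}

first=$(claim)
second=$(claim)
```

Because the lock is held across both file writes, two racing workers serialize on `claim` and always receive distinct items.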
However, piq doesn't have true transactions. If your machine crashes mid-operation, you might end up with an item in two state files (duplicate) or neither (lost). We order writes to make duplicates more likely than losses, but it's not bulletproof.
The target audience: Shell hackers who want a bit more than a bash for loop, but don't need a fault-tolerant distributed queue. If you need real guarantees, use Redis, SQS, or a proper message broker.
I made this tool for myself because I wanted something better than wrangling for loops and temp files, but simpler than reaching for SQLite every time I needed to track state in a script.
cargo install piq
MIT