curl -fsSL https://raw.githubusercontent.com/SuppieRK/ccp/main/scripts/install.sh | sh
Cut the noise.
Own the filtering rules.
CCP (Command Compression Proxy) is a CLI output filtering layer that compresses noisy terminal output before it reaches your coding agent.
Keep command behavior intact, shape terminal output with command-aware YAML rules, preserve exit codes and critical diagnostics, and fall back to native output when details matter.
Terminal output is built for humans, not agents
Tools print banners, progress bars, repeated status lines, and other boilerplate that helps a person scan a terminal but adds little value for a coding agent reading shell output as context.
Noise wastes context and hides diagnostics
Long output burns the context window, slows follow-up reasoning, and makes failures, file paths, and critical diagnostics harder to spot when the agent decides what to do next.
CCP filters CLI output before agents see it
It strips repetitive shell noise before the output reaches the agent, while keeping the lines that still change what the agent should do next.
You own the rules and the behavior
Author YAML filters close to your workflow, preserve exit codes and critical diagnostics, and fall back to native output when it matters.
Use with your coding agent
Gains on real work
CCP does not just promise smaller output. The ccp gain command shows the token impact of real commands, so you can see where filtering helps, where it does not, and whether the output stays worth trusting.
88 cmds · 5,330,571 → 90,127 tokens (98.3% saved)
Wins : find (4.8m / 99%) · gradle (367k / 87%) · grep (6k / 1%)
Drag : cd (23 cmds) · jar (21 cmds) · grep (4 cmds)
Trend: ↑ +12.4 pts week over week (85.9% → 98.3%) · on a roll
1,825 cmds · 2,461,959 → 2,160,427 tokens (12.3% saved)
Wins : grep (67k / 60%) · go (54k / 90%) · git (16k / 59%)
Drag : sed (765 cmds) · openspec (245 cmds)
Trend: ↓ -2.1 pts week over week (14.4% → 12.3%) · slipping
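The saved percentage reads as a plain before/after ratio of token counts (an assumption about how the metric is computed, but it matches the first report above):

```shell
# Saved % as 1 - after/before, using the token counts from the
# first gain report (5,330,571 -> 90,127).
awk 'BEGIN { before = 5330571; after = 90127;
             printf "%.1f%% saved\n", (1 - after / before) * 100 }'
# -> 98.3% saved
```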
Create CLI output filters for your workflow
Write command-aware YAML, replay real output, and verify exactly what changed before you promote a terminal output filter.
version: 1
filter: "yarn"
cases:
  - id: "run-success"
    when_arguments:
      have_sequence: ["run"]
    compress_output:
      stdout:
        lines:
          skip:
            - starts_with: "yarn run v"
            - starts_with: "$ "
            - starts_with: "Done in "
yarn run v1.22.22
$ node scripts/success.js
success-line-1
success-line-2
Done in 0.06s.
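The effect of the skip rules above can be sketched as a plain grep pipeline (an illustration of the rule semantics only, not how CCP runs internally):

```shell
# Feed the sample yarn output through the equivalent of the three
# starts_with skip rules; only the real script output survives.
printf '%s\n' \
  'yarn run v1.22.22' \
  '$ node scripts/success.js' \
  'success-line-1' \
  'success-line-2' \
  'Done in 0.06s.' \
| grep -v -e '^yarn run v' -e '^\$ ' -e '^Done in '
# -> success-line-1
# -> success-line-2
```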
Scaffold a filter, capture a real command, verify the replay, then promote only when the behavior earns trust.
Recorded examples from benchmark fixtures
Each pair below shows the recorded fixture command and its full output first, followed by the verified CCP output for the same case.
npm run success-noisy
> ccp-npm-benchmark-basic@1.0.0 success-noisy
> node --test test/success.test.js
TAP version 13
# Subtest: add computes sums
ok 1 - add computes sums
---
duration_ms: 0.366266
type: 'test'
...
# Subtest: divide computes quotient
ok 2 - divide computes quotient
---
duration_ms: 0.067165
type: 'test'
...
1..2
# tests 2
# suites 0
# pass 2
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 37.905034
# Subtest: add computes sums
ok 1 - add computes sums
# Subtest: divide computes quotient
ok 2 - divide computes quotient
1..2
# tests 2
# pass 2
# fail 0
# duration_ms 37.905034
git status
On branch git-improvements
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: testdata/benchmarks/git/status-success-porcelain/output.txt
modified: testdata/benchmarks/git/status-success-porcelain/stdout.txt
modified: testdata/benchmarks/git/status-success/output.txt
modified: testdata/benchmarks/git/status-success/stdout.txt
Untracked files:
(use "git add <file>..." to include in what will be committed)
FIND.md
GIT.md
GREP.md
LS.md
TOOL_EXPLORATION_LOG.md
testdata/benchmarks/git/status-success-porcelain/stderr.txt
testdata/benchmarks/git/status-success/stderr.txt
no changes added to commit (use "git add" and/or "git commit -a")
## git-improvements
M testdata/benchmarks/git/status-success-porcelain/output.txt
M testdata/benchmarks/git/status-success-porcelain/stdout.txt
M testdata/benchmarks/git/status-success/output.txt
M testdata/benchmarks/git/status-success/stdout.txt
?? FIND.md
?? GIT.md
?? GREP.md
?? LS.md
?? TOOL_EXPLORATION_LOG.md
?? testdata/benchmarks/git/status-success-porcelain/stderr.txt
?? testdata/benchmarks/git/status-success/stderr.txt
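The compressed form above mirrors git's own machine-readable status format: a branch header plus two-letter status codes per path. You can see the same shape natively (a sketch; run it inside any repository):

```shell
# git's porcelain v1 format with a branch header -- the same compact
# shape as the filtered git status output shown above.
git status --porcelain=v1 -b
```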
bun install --frozen-lockfile
bun install v1.3.11 (af24e281)
+ @nestjs/testing@10.4.8
+ @types/bun@1.1.13
+ @types/express@5.0.0
+ @types/supertest@6.0.2
+ @typescript-eslint/eslint-plugin@8.15.0
+ @typescript-eslint/parser@8.15.0
+ eslint@8.57.1
+ eslint-config-prettier@9.1.0
+ eslint-plugin-prettier@5.2.1
+ prettier@3.3.3
+ supertest@7.0.0
+ tsc-watch@6.2.1
+ typescript@5.6.3
+ @nestjs/common@10.4.8
+ @nestjs/core@10.4.8
+ @nestjs/mapped-types@2.0.6
+ @nestjs/platform-express@10.4.8
+ reflect-metadata@0.2.2
+ rxjs@7.8.1
278 packages installed [23.85s]
+ @nestjs/testing@10.4.8
+ @types/bun@1.1.13
+ @types/express@5.0.0
+ @types/supertest@6.0.2
+ @typescript-eslint/eslint-plugin@8.15.0
+ @typescript-eslint/parser@8.15.0
+ eslint@8.57.1
+ eslint-config-prettier@9.1.0
+ eslint-plugin-prettier@5.2.1
+ prettier@3.3.3
+ supertest@7.0.0
+ tsc-watch@6.2.1
+ typescript@5.6.3
+ @nestjs/common@10.4.8
+ @nestjs/core@10.4.8
+ @nestjs/mapped-types@2.0.6
+ @nestjs/platform-express@10.4.8
+ reflect-metadata@0.2.2
+ rxjs@7.8.1
278 packages installed [23.85s]
find ./internal -name "*.go" -type f
/internal/audit/audit.go
/internal/audit/audit_test.go
/internal/audit/suite_test.go
/internal/benchmark/run.go
/internal/benchmark/run_test.go
/internal/benchmark/suite_test.go
/internal/cli/parse.go
/internal/cli/parse_test.go
/internal/cli/suite_test.go
/internal/contracts/types.go
/internal/engine/ansi.go
/internal/engine/ansi_test.go
/internal/engine/buffer.go
/internal/engine/buffer_test.go
/internal/engine/engine.go
/internal/engine/engine_test.go
/internal/
parser.go
parser_test.go
runner.go
runner_integration_test.go
runner_test.go
suite_test.go
/internal/audit/
audit.go
audit_test.go
suite_test.go
/internal/benchmark/
run.go
run_test.go
suite_test.go
/internal/cli/
parse.go
parse_test.go
suite_test.go
eslint src/lint_fail.js
/mnt/c/.../src/lint_fail.js
1:7 error 'unused' is assigned a value but never used no-unused-vars
1:17 error Missing semicolon semi
4:15 error Missing semicolon semi
✖ 3 problems (3 errors, 0 warnings)
2 errors and 0 warnings potentially fixable with the `--fix` option.
src/lint_fail.js
1:7 error no-unused-vars 'unused' is assigned a value but never used
1:17 error semi Missing semicolon
4:15 error semi Missing semicolon
next build
▲ Next.js 15.5.12
Creating an optimized production build ...
Local search index generated...
Generated 13 documents in .contentlayer
Failed to compile.
./app/page.tsx
Error: x Unexpected token. Did you mean {'}'} or }?
6 | const sortedPosts = sortPosts(allBlogs)
7 | const posts = allCoreContent(sortedPosts)
8 | return <Main posts={posts} >
9 | }
^
x Unexpected eof
Caused by:
Syntax Error
> Build failed because of webpack errors
▲ Next.js 15.5.12
Local search index generated...
Generated 13 documents in .contentlayer
Failed to compile.
./app/page.tsx
Error: x Unexpected token. Did you mean {'}'} or }?
,-[/tmp/repo-43a/app/page.tsx:9:1]
: ^
`----
x Unexpected eof
Caused by:
Syntax Error
Import trace for requested module:
> Build failed because of webpack errors
./.venv/bin/pytest -q tests/test_app.py::test_fail
F [100%]
=================================== FAILURES ===================================
__________________________________ test_fail ___________________________________
def test_fail():
print("captured stdout call")
left = {"ok": False}
right = {"ok": True}
> assert left == right
E AssertionError: assert {'ok': False} == {'ok': True}
E Differing items:
E {'ok': False} != {'ok': True}
E Use -v to get more diff
tests/test_app.py:10: AssertionError
----------------------------- Captured stdout call -----------------------------
captured stdout call
=========================== short test summary info ============================
FAILED tests/test_app.py::test_fail - AssertionError: assert {'ok': False} ==...
1 failed in 1.42s
failure details:
- test_fail
E AssertionError: assert {'ok': False} == {'ok': True}
tests/test_app.py:10: AssertionError
----------------------------- Captured stdout call -----------------------------
captured stdout call
summary:
FAILED tests/test_app.py::test_fail - AssertionError: assert {'ok': False} ==...
go test -count=1 ./...
--- FAIL: TestGlobalHistorySource (0.00s)
gain_test.go:405: source field mismatch
expected: /tmp/lifecycle/repo-one
actual: /tmp/lifecycle/repo
gain_test.go:406: source field should include repo tail
FAIL
FAIL fixture-go-test 0.001s
--- FAIL: TestGlobalHistorySource
gain_test.go:405: source field mismatch
expected: /tmp/lifecycle/repo-one
actual: /tmp/lifecycle/repo
gain_test.go:406: source field should include repo tail
./gradlew test
WARNING: A restricted method in java.lang.System has been called
WARNING: Use --enable-native-access=ALL-UNNAMED to avoid a warning for callers in this module
WARNING: Restricted methods will be blocked in a future release unless native access is enabled
> Task :compileJava NO-SOURCE
> Task :processResources NO-SOURCE
> Task :classes UP-TO-DATE
> Task :compileTestJava UP-TO-DATE
> Task :processTestResources NO-SOURCE
> Task :testClasses UP-TO-DATE
> Task :test FAILED
FailureTest > test() FAILED
org.opentest4j.AssertionFailedError at FailureTest.java:9
2 actionable tasks: 1 executed, 1 up-to-date
1 test completed, 1 failed
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':test'.
> There were failing tests. See the report at: file:///build/reports/tests/test/index.html
* Try:
> Run with --scan to generate a Build Scan (powered by Develocity).
BUILD FAILED in 1s
> Task :test FAILED
FailureTest > test() FAILED
org.opentest4j.AssertionFailedError at FailureTest.java:9
2 actionable tasks: 1 executed, 1 up-to-date
1 test completed, 1 failed
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':test'.
> There were failing tests. See the report at: file:///build/reports/tests/test/index.html
BUILD FAILED in 1s