Memory-efficient JSON processing. Lazy Proxy expansion uses 70% less RAM than JSON.parse.
TerseJSON does LESS work than JSON.parse, not more. The Proxy skips full deserialization - only accessed fields allocate memory. Plus 30-80% smaller payloads.
Your CMS API returns 21 fields per article. Your list view renders 3.
```js
// Standard JSON.parse workflow:
const articles = await fetch('/api/articles').then(r => r.json());
// Result: 1000 objects x 21 fields = 21,000 properties allocated in memory
// You use: title, slug, excerpt (3 fields)
// Wasted: 18,000 properties that need garbage collection
```

Full deserialization wastes memory: every field gets allocated whether you access it or not. Binary formats (Protobuf, MessagePack) have the same problem - they require complete deserialization.
TerseJSON's Proxy wraps compressed data and translates keys on-demand:
```js
// TerseJSON workflow:
const articles = await terseFetch('/api/articles');
// Result: Compressed payload + Proxy wrapper
// Access: article.title → translates key, returns value
// Never accessed: 18 other fields stay compressed, never allocate
```

Memory Benchmarks (1000 records, 21 fields each):
| Fields Accessed | Normal JSON | TerseJSON Proxy | Memory Saved |
|---|---|---|---|
| 1 field | 6.35 MB | 4.40 MB | 31% |
| 3 fields (list view) | 3.07 MB | ~0 MB | ~100% |
| 6 fields (card view) | 3.07 MB | ~0 MB | ~100% |
| All 21 fields | 4.53 MB | 1.36 MB | 70% |
Run the benchmark yourself: `node --expose-gc demo/memory-analysis.js`
The most common misconception is that the Proxy adds overhead. Let's trace the actual operations:
Standard JSON.parse workflow:
- Parse 890KB string → allocate 1000 objects x 21 fields = 21,000 properties
- Access 3 fields per object
- GC eventually collects 18,000 unused properties
TerseJSON workflow:
- Parse 180KB string (smaller = faster) → allocate 1000 objects x 21 SHORT keys
- Wrap in Proxy (O(1), ~0.1ms, no allocation)
- Access 3 fields → 3,000 expanded properties CREATED
- The other 18,000 expanded properties NEVER EXIST
The math:
- Parse time: Smaller string (180KB vs 890KB) = faster
- Allocations: 3,000 vs 21,000 = 86% fewer
- GC pressure: Only 3,000 objects to collect vs 21,000
- Proxy lookup: O(1) Map access, ~0.001ms per field
Result: LESS total work, not more. The Proxy doesn't add overhead - it skips work.
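To make that concrete, here is a minimal sketch of the lazy-expansion idea - illustrative only, not TerseJSON's actual internals:

```js
// Rows keep their short keys; a Proxy translates long-key reads on demand,
// so fields you never touch are never expanded into new properties.
const longToShort = { firstName: 'a', lastName: 'b', emailAddress: 'c' };

function lazyRow(row) {
  return new Proxy(row, {
    get(target, prop) {
      const short = longToShort[prop];
      return short !== undefined ? target[short] : target[prop];
    },
  });
}

const row = lazyRow({ a: 'Ada', b: 'Lovelace', c: 'ada@example.com' });
console.log(row.firstName); // 'Ada' - only this lookup does any work
```

(The real library also handles enumeration, JSON.stringify, and nested structures.)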
```bash
npm install tersejson
```

Server:

```js
import express from 'express';
import { terse } from 'tersejson/express';
const app = express();
app.use(terse());
app.get('/api/users', (req, res) => {
  // Just send data as normal - compression is automatic!
  res.json(users);
});
```

Client:

```js
import { fetch } from 'tersejson/client';
// Use exactly like regular fetch
const users = await fetch('/api/users').then(r => r.json());
// Access properties normally - Proxy handles key translation
console.log(users[0].firstName); // Works transparently!
console.log(users[0].emailAddress); // Works transparently!
```

How it works:

```
┌─────────────────────────────────────────────────────────────┐
│ SERVER │
│ 1. Your Express route calls res.json(data) │
│ 2. TerseJSON middleware intercepts │
│ 3. Compresses keys: { "a": "firstName", "b": "lastName" } │
│ 4. Sends smaller payload (180KB vs 890KB) │
└─────────────────────────────────────────────────────────────┘
↓ Network (smaller, faster)
┌─────────────────────────────────────────────────────────────┐
│ CLIENT │
│ 5. JSON.parse smaller string (faster) │
│ 6. Wrap in Proxy (O(1), near-zero cost) │
│ 7. Access data.firstName → Proxy translates to data.a │
│ 8. Unused fields never materialize in memory │
└─────────────────────────────────────────────────────────────┘
```

Typical use cases:
- CMS list views - title + slug + excerpt from 20+ field objects (see the sketch after this list)
- Dashboards - large datasets, aggregate calculations on subsets
- Mobile apps - memory constrained, infinite scroll
- E-commerce - product grids (name + price + image from 30+ field objects)
- Long-running SPAs - memory accumulation over hours (support tools, dashboards)
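For example, a hypothetical CMS list view using the client fetch shown above - only the three accessed fields ever materialize:

```js
import { fetch } from 'tersejson/client';

// 21 fields arrive compressed; only these 3 allocate as real properties.
const articles = await fetch('/api/articles').then(r => r.json());
const listItems = articles.map(a => ({
  title: a.title,
  slug: a.slug,
  excerpt: a.excerpt,
}));
```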
Memory efficiency is the headline. Smaller payloads are the bonus:
| Compression Method | Reduction | Use Case |
|---|---|---|
| TerseJSON alone | 30-39% | Sites without Gzip (68% of web) |
| Gzip alone | ~75% | Large payloads (>32KB) |
| TerseJSON + Gzip | ~85% | Recommended for production |
| TerseJSON + Brotli | ~93% | Maximum compression |
Network speed impact (1000-record payload):
| Network | Normal JSON | TerseJSON + Gzip | Saved |
|---|---|---|---|
| 4G (20 Mbps) | 200ms | 30ms | 170ms |
| 3G (2 Mbps) | 2,000ms | 300ms | 1,700ms |
| Slow 3G | 10,000ms | 1,500ms | 8,500ms |
"Just use gzip" misses two points:
-
68% of websites don't have Gzip enabled (W3Techs). Proxy defaults are hostile - nginx, Traefik, Kubernetes all ship with compression off.
-
Gzip doesn't help memory. Even with perfect compression over the wire, JSON.parse still allocates every field. TerseJSON's Proxy keeps unused fields compressed in memory.
TerseJSON works at the application layer:
- No proxy config needed
- No DevOps tickets
- Stacks with gzip/brotli for maximum savings (see the sketch after this list)
- Plus memory benefits that gzip can't provide
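Stacking is one middleware away. A minimal sketch, assuming the standard `compression` package from npm:

```js
import express from 'express';
import compression from 'compression'; // transport-level gzip/brotli
import { terse } from 'tersejson/express';

const app = express();
app.use(compression()); // compresses the (already smaller) JSON on the wire
app.use(terse());       // shrinks keys at the application layer first
```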
| | TerseJSON | Protobuf/MessagePack |
|---|---|---|
| Wire compression | 30-80% | 80-90% |
| Memory on partial access | Only accessed fields | Full deserialization required |
| Schema required | No | Yes |
| Human-readable | Yes (JSON in DevTools) | No (binary) |
| Migration effort | 2 minutes | Days/weeks |
| Debugging | Easy | Need special tools |
Binary formats win on wire size. TerseJSON wins on memory.
If you access 3 fields from a 21-field object:
- Protobuf: All 21 fields deserialized into memory
- TerseJSON: Only 3 fields materialize
NEW: Automatic memory-efficient queries with the MongoDB native driver.

```js
import { terseMongo } from 'tersejson/mongodb';
import { MongoClient } from 'mongodb';
// Call once at app startup
await terseMongo();
// All queries automatically return Proxy-wrapped results
const client = new MongoClient(uri);
const users = await client.db('mydb').collection('users').find().toArray();
// Access properties normally - 70% less memory
console.log(users[0].firstName); // Works transparently!
```

What gets patched:
- `find().toArray()` - arrays of documents
- `find().next()` - single document iteration
- `for await (const doc of cursor)` - async iteration (example below)
- `findOne()` - single document queries
- `aggregate().toArray()` - aggregation results
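For instance, the patched async iteration yields each document already Proxy-wrapped (a sketch reusing the `client` from the snippet above):

```js
// Each doc is wrapped, so only the fields you read are expanded.
const cursor = client.db('mydb').collection('users').find();
for await (const doc of cursor) {
  console.log(doc.firstName); // translated on access
}
```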
Options:
```js
await terseMongo({
  minArrayLength: 5,    // Only compress arrays with 5+ items
  skipSingleDocs: true, // Don't wrap findOne results
  minKeyLength: 4,      // Only compress keys with 4+ chars
});
// Restore original behavior
import { unterse } from 'tersejson/mongodb';
await unterse();
```

Automatic memory-efficient queries with node-postgres (pg).

```js
import { tersePg } from 'tersejson/pg';
import { Client, Pool } from 'pg';
// Call once at app startup
await tersePg();
// All queries automatically return Proxy-wrapped results
const client = new Client();
await client.connect();
const { rows } = await client.query('SELECT * FROM users');
// Access properties normally - 70% less memory
console.log(rows[0].firstName); // Works transparently!
// Works with Pool too
const pool = new Pool();
const { rows: users } = await pool.query('SELECT * FROM users');
```

Options:
```js
await tersePg({
  minArrayLength: 5,    // Only compress arrays with 5+ items
  skipSingleRows: true, // Don't wrap single-row results
  minKeyLength: 4,      // Only compress keys with 4+ chars
});
// Restore original behavior
import { untersePg } from 'tersejson/pg';
await untersePg();
```

Automatic memory-efficient queries with mysql2.

```js
import { terseMysql } from 'tersejson/mysql';
import mysql from 'mysql2/promise';
// Call once at app startup
await terseMysql();
// All queries automatically return Proxy-wrapped results
const connection = await mysql.createConnection({ host: 'localhost', user: 'root' });
const [rows] = await connection.query('SELECT * FROM users');
// Access properties normally - 70% less memory
console.log(rows[0].firstName); // Works transparently!
// Works with Pool too
const pool = mysql.createPool({ host: 'localhost', user: 'root' });
const [users] = await pool.query('SELECT * FROM users');
```

Options:
```js
await terseMysql({
  minArrayLength: 5,    // Only compress arrays with 5+ items
  skipSingleRows: true, // Don't wrap single-row results
  minKeyLength: 4,      // Only compress keys with 4+ chars
});
// Restore original behavior
import { unterseMysql } from 'tersejson/mysql';
await unterseMysql();
```

Automatic memory-efficient queries with better-sqlite3.

```js
import { terseSqlite } from 'tersejson/sqlite';
import Database from 'better-sqlite3';
// Call once at app startup (synchronous)
terseSqlite();
// All queries automatically return Proxy-wrapped results
const db = new Database('my.db');
const rows = db.prepare('SELECT * FROM users').all();
// Access properties normally - 70% less memory
console.log(rows[0].firstName); // Works transparently!
// Single row queries too
const user = db.prepare('SELECT * FROM users WHERE id = ?').get(1);
console.log(user.email); // Works transparently!
```

Options:
```js
terseSqlite({
  minArrayLength: 5,    // Only compress arrays with 5+ items
  skipSingleRows: true, // Don't wrap get() results
  minKeyLength: 4,      // Only compress keys with 4+ chars
});
// Restore original behavior
import { unterseSqlite } from 'tersejson/sqlite';
unterseSqlite();
```

Automatic memory-efficient queries with the Sequelize ORM.

```js
import { terseSequelize } from 'tersejson/sequelize';
import { Sequelize, Model, DataTypes } from 'sequelize';
// Call once at app startup
await terseSequelize();
// Define your models as normal
class User extends Model {}
User.init({ firstName: DataTypes.STRING }, { sequelize });
// All queries automatically return Proxy-wrapped results
const users = await User.findAll();
// Access properties normally - 70% less memory
console.log(users[0].firstName); // Works transparently!
// Works with all Sequelize query methods
const user = await User.findOne({ where: { id: 1 } });
const { rows, count } = await User.findAndCountAll();
```

Options:
```js
await terseSequelize({
  minArrayLength: 5,     // Only compress arrays with 5+ items
  skipSingleRows: true,  // Don't wrap findOne/findByPk results
  usePlainObjects: true, // Convert Model instances to plain objects (default)
});
// Restore original behavior
import { unterseSequelize } from 'tersejson/sequelize';
await unterseSequelize();
```

TerseJSON includes utilities for memory-efficient server-side data handling:

```js
import { TerseCache, compressStream } from 'tersejson/server-memory';
// Memory-efficient caching - stores compressed, expands on access
const cache = new TerseCache();
cache.set('users', largeUserArray);
const users = cache.get('users'); // Returns Proxy-wrapped data
// Streaming compression for database cursors
const cursor = db.collection('users').find().stream();
for await (const batch of compressStream(cursor, { batchSize: 100 })) {
// Process compressed batches without loading entire result set
}
// Inter-service communication - pass compressed data without intermediate expansion
import { createTerseServiceClient } from 'tersejson/server-memory';
const serviceB = createTerseServiceClient({ baseUrl: 'http://service-b' });
const data = await serviceB.get('/api/users'); // Returns Proxy-wrapped
```

Express middleware options:

```js
import { terse } from 'tersejson/express';
app.use(terse({
  minArrayLength: 5, // Only compress arrays with 5+ items
  minKeyLength: 4,   // Only compress keys with 4+ characters
  maxDepth: 5,       // Max nesting depth to traverse
  debug: true,       // Log compression stats
}));
```

Client API:

```js
import {
  fetch,       // Drop-in fetch replacement
  createFetch, // Create configured fetch instance
  expand,      // Fully expand a terse payload
  proxy,       // Wrap payload with Proxy (default)
  process,     // Auto-detect and expand/proxy
} from 'tersejson/client';
```
```js
// Drop-in fetch replacement
const data = await fetch('/api/users').then(r => r.json());
// Manual processing
import { process } from 'tersejson/client';
const response = await regularFetch('/api/users');
const data = process(await response.json());
```

Core API:

```js
import {
  compress,       // Compress an array of objects
  expand,         // Expand a terse payload (full deserialization)
  wrapWithProxy,  // Wrap payload with Proxy (lazy expansion - recommended)
  isTersePayload, // Check if data is a terse payload
} from 'tersejson';
// Manual compression
const compressed = compress(users, { minKeyLength: 3 });
// Two expansion strategies:
const expanded = expand(compressed); // Full expansion - all fields allocated
const proxied = wrapWithProxy(compressed); // Lazy expansion - only accessed fields
```
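A handy pattern with these exports - a guard that only wraps data that is actually a terse payload (a sketch using the API above):

```js
import { isTersePayload, wrapWithProxy } from 'tersejson';

// Leave plain JSON untouched; lazily wrap terse payloads.
function hydrate(data) {
  return isTersePayload(data) ? wrapWithProxy(data) : data;
}
```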
Axios:

```js
import axios from 'axios';
import { createAxiosInterceptors } from 'tersejson/integrations';
const { request, response } = createAxiosInterceptors();
axios.interceptors.request.use(request);
axios.interceptors.response.use(response);
```

SWR:

```jsx
import useSWR from 'swr';
import { createSWRFetcher } from 'tersejson/integrations';
const fetcher = createSWRFetcher();
function UserList() {
  const { data } = useSWR('/api/users', fetcher);
  // key is illustrative - use a stable id from your data
  return <ul>{data?.map(user => <li key={user.firstName}>{user.firstName}</li>)}</ul>;
}
```

React Query:

```jsx
import { useQuery } from '@tanstack/react-query';
import { createQueryFn } from 'tersejson/integrations';
const queryFn = createQueryFn();
function UserList() {
  const { data } = useQuery({
    queryKey: ['users'],
    queryFn: () => queryFn('/api/users'),
  });
  return <div>{data?.[0].firstName}</div>;
}
```

GraphQL:

```js
// Server
import { graphqlHTTP } from 'express-graphql'; // assuming the express-graphql package
import { terseGraphQL } from 'tersejson/graphql';
app.use('/graphql', terseGraphQL(graphqlHTTP({ schema })));

// Client
import { ApolloClient, InMemoryCache, HttpLink, from } from '@apollo/client';
import { createTerseLink } from 'tersejson/graphql-client';

const httpLink = new HttpLink({ uri: '/graphql' });
const client = new ApolloClient({
  link: from([createTerseLink(), httpLink]),
  cache: new InMemoryCache(),
});
```

Full type definitions included:

```ts
import type { TersePayload, Tersed } from 'tersejson';
interface User {
  firstName: string;
  lastName: string;
}
const users: User[] = await fetch('/api/users').then(r => r.json());
users[0].firstName; // TypeScript knows this is a string
```

FAQ:

**Does JSON.stringify expose the short keys?**

No! The Proxy is transparent: JSON.stringify(data) outputs the original key names.
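Illustratively, using the core API from above (output shape per the guarantee just stated):

```js
import { compress, wrapWithProxy } from 'tersejson';

const proxied = wrapWithProxy(compress([{ firstName: 'Ada' }]));
JSON.stringify(proxied); // '[{"firstName":"Ada"}]' - long keys, not short ones
```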
**Are nested objects and arrays supported?**

Fully supported. TerseJSON recursively compresses nested objects and arrays.
**What's the performance overhead?**

Proxy mode adds <5% CPU overhead vs JSON.parse(), but with smaller payloads and fewer allocations the net total work is LESS. Memory use is significantly lower.
**When should I use wrapWithProxy() vs expand()?**

- `wrapWithProxy()` (default): best for most cases - lazy expansion, lower memory.
- `expand()`: when you need a plain object (serialization to storage, passing to libraries that don't support Proxy).
**Which browsers are supported?**

Works in all modern browsers supporting Proxy (ES6):
- Chrome 49+, Firefox 18+, Safari 10+, Edge 12+
Contributions welcome! Please read our contributing guidelines.
MIT - see LICENSE
tersejson.com | Memory-efficient JSON for high-volume APIs