LineSpec DSL Reference
A deterministic domain-specific language for integration testing. LineSpec intercepts database and HTTP traffic at the protocol level for language-agnostic, repeatable test execution.
brew tap livecodelife/linespec && brew install linespec
go install github.com/livecodelife/linespec/cmd/linespec@v3.3.1
Run linespec init to interactively create your .linespec.yml configuration file. Then run linespec build to build the linespec:latest Docker image required by the proxy sidecars (the Homebrew install does this automatically).
Core Design Principles
- Deterministic parsing — no NLP, no guessing.
- Single entrypoint and single exit per spec.
- Clear separation between:
- Trigger (RECEIVE)
- External dependencies (EXPECT)
- System response (RESPOND)
- All payload shapes defined externally in YAML or JSON files.
DSL Grammar Overview
A LineSpec file MUST follow this structure:
- Exactly one RECEIVE statement
- Zero or more EXPECT statements
- Zero or more EXPECT_NOT statements
- Exactly one RESPOND statement
Statements MUST appear in this order:
TEST <name> (optional)
VARS (optional — declare typed variables)
RECEIVE
EXPECT (0..n)
EXPECT_NOT (0..n)
RESPOND
No statements may appear after RESPOND.
File Extension
Recommended extension: .linespec
Example: create_todo_success.linespec
Test Name
Optional test name declaration:
TEST <name>
If omitted, the filename (without extension) is used as the test name.
VARS Block
The optional VARS block declares typed variables before RECEIVE. Values are pre-generated once before any payload is loaded, so every occurrence of a variable throughout the spec resolves to the same value.
Syntax
VARS
VAR_NAME: <type> [constraint=value ...]
VAR_NAME: <type> [constraint=value ...]
...
The block must appear after TEST (if present) and before RECEIVE. Each indented line declares one variable: VAR_NAME: type followed by zero or more key=value constraint pairs.
Supported types
| Type | Default generated value | Supported constraints |
|---|---|---|
| uuid | RFC 4122 v4 UUID, e.g. 550e8400-e29b-41d4-a716-446655440000 | none |
| integer | Random integer between 1 and 99999 | min=N, max=N |
| string | lowercase_varname_ + 8 random hex chars | length=N, charset=<set>, pattern=<regex-like> |
| enum | none — the values constraint is required | values=a,b,c |
Constraints reference
integer
- min=N — lower bound (inclusive). Default: 1.
- max=N — upper bound (inclusive). Default: 99999.
string
- length=N — exact character count of the generated string.
- charset=<set> — character pool. Supported values: alphanumeric, alpha, numeric, hex, lowercase, uppercase. Default: hex.
- pattern=<regex-like> — generate a string matching a simplified regex. Supports character classes ([a-z], [A-Z0-9]), repetition counts ({N}), and literal text. For example, pattern=prov-[0-9]{4}-[a-f0-9]{8} generates strings like prov-2026-dab46dda.
enum
- values=a,b,c — comma-separated list of allowed values. One is chosen at random each run.
Why use VARS?
Without VARS, variable types are inferred from the name (variables ending in _UUID get a UUID; everything else gets a string). VARS lets you be explicit — it is the only way to generate an integer-typed variable that encodes as a JSON number (not a quoted string) in payload files and HTTP responses. Constraints let you control the shape of generated values so the service's validation logic is exercised realistically.
Resolution order
- If the variable is already set in the environment, that value is used
- Otherwise a random value of the declared type (and constraints) is generated and injected into the test container
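The resolution-and-generation step above can be sketched as follows. This is an illustrative sketch, not the runner's actual implementation; the function name and charset subset are hypothetical:

```python
import os
import random
import uuid

def generate_var(name, vtype, **constraints):
    """Resolve one VARS declaration: the environment wins, otherwise
    a random value of the declared type is generated (sketch)."""
    if name in os.environ:                      # step 1: environment override
        return os.environ[name]
    if vtype == "uuid":
        return str(uuid.uuid4())
    if vtype == "integer":                      # encodes as a JSON number
        return random.randint(int(constraints.get("min", 1)),
                              int(constraints.get("max", 99999)))
    if vtype == "enum":                         # values=a,b,c is required
        return random.choice(constraints["values"].split(","))
    if vtype == "string":                       # default: 8 hex chars
        length = int(constraints.get("length", 8))
        pools = {"hex": "0123456789abcdef",     # subset of supported charsets
                 "numeric": "0123456789",
                 "lowercase": "abcdefghijklmnopqrstuvwxyz"}
        pool = pools[constraints.get("charset", "hex")]
        return name.lower() + "_" + "".join(random.choice(pool) for _ in range(length))
    raise ValueError(f"unknown VARS type: {vtype}")
```

Because each value is produced once and cached, every later occurrence of the same variable resolves identically.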
Example — integer with bounds, string with charset
TEST get_user_with_vars
VARS
AUTH_TOKEN: string length=32 charset=alphanumeric
USER_ID: integer min=1 max=9999
RECEIVE HTTP:GET /api/v1/users/${USER_ID}
HEADERS
Authorization: Bearer ${AUTH_TOKEN}
EXPECT READ:MYSQL users
USING_SQL """
SELECT users.* FROM `users` WHERE `users`.`token` = '${AUTH_TOKEN}' LIMIT 1
"""
RETURNS {{payloads/user_response.json}}
EXPECT READ:MYSQL users
USING_SQL_CONTAINS """
WHERE users.id =
"""
RETURNS {{payloads/user_response.json}}
RESPOND HTTP:200
WITH {{payloads/user_public_response.json}}
USER_ID is declared as integer min=1 max=9999, so ${USER_ID} resolves to a number such as 4271 in the URL and in any payload file that references it. The mock registry receives it as a JSON number, so the service's response body encodes user_id as 4271, not "4271".
Example — string with pattern
VARS
AUTH_TOKEN: string pattern=prov-[0-9]{4}-[a-f0-9]{8}
Generates values like prov-2026-dab46dda — useful when the service expects a token in a specific structured format and you want the test to exercise that validation path.
Example — enum
VARS
ORDER_STATUS: enum values=pending,active,cancelled
Picks one of the three values at random each run.
RECEIVE Statement
Defines the trigger request into the System Under Test (SUT).
Syntax
RECEIVE HTTP:<METHOD> <url>
[WITH {{<payload_file>}}]
[HEADERS
  <Header-Name>: <value>
  ...]
Example
RECEIVE HTTP:POST /api/v1/todos
WITH {{todo.yaml}}
RECEIVE HTTP:GET /api/v1/users/42
HEADERS
Authorization: Bearer token_abc123xyz
Rules
- Exactly one RECEIVE per file
- MUST appear before any EXPECT or EXPECT_NOT
- HTTP method is required
- URL or path is required (the examples above use service-relative paths such as /api/v1/todos)
- WITH is optional for HTTP requests without a body
- Body must reference an external YAML or JSON file
- HEADERS is optional and supports multiple header lines with indentation
- WITH must come before HEADERS if both are present
EXPECT Statement
Defines an external dependency interaction that MUST occur during execution.
General Syntax
EXPECT <CHANNEL>:<PROTOCOL> <target>
[USING_SQL """
<sql>
"""]
[USING_SQL_CONTAINS """
<sql-fragment>
"""]
[WITH {{<input_payload>}}]
[RETURNS {{<response_file>}}]
[RETURNS EMPTY]
[VERIFY query CONTAINS '<string>']
[VERIFY query NOT_CONTAINS '<string>']
[VERIFY query MATCHES /<regex>/]
SQL Matching: Semantic (Recommended) vs Legacy Text Matching
Semantic Matching (Recommended)
Semantic matching routes by which tables a query touches and lets you add optional verification constraints. It is ORM-agnostic — it does not care whether your ORM uses $1, ?, or inline literals, or whether it adds ORDER BY, LIMIT, or varies column order. For PostgreSQL, the proxy reads actual Bind message parameter values, so WHERE id = $1 and WHERE id = 42 match identically when $1 is bound to 42.
| Keyword | Purpose |
|---|---|
| ACCESSING_TABLES [t1, t2, ...] | Route this mock to queries touching exactly these tables (exact set) |
| VERIFY_OPERATION SELECT\|INSERT\|UPDATE\|DELETE | Assert the DML type |
| VERIFY_WHERE_COLUMNS [col1, col2, ...] | Assert that all listed columns appear in the WHERE clause |
| VERIFY_WHERE (indented block) | Assert specific column values in WHERE (PRESENT = any value) |
| VERIFY_WRITTEN_VALUES (indented block) | Assert column values in INSERT column list or UPDATE SET clause |
| CALL N (on the EXPECT line) | Tiebreaker: consumed in ascending N order when multiple mocks match |
Specificity-wins: the mock with the most declared VERIFY_ clauses wins. CALL N breaks ties by ordering (lowest N first).
When ACCESSING_TABLES is used the table name on the EXPECT line may be omitted.
# Disambiguate two SELECTs on the same table by WHERE value
EXPECT READ:MYSQL
ACCESSING_TABLES [users]
VERIFY_WHERE
token: PRESENT
RETURNS {{user.yaml}}
EXPECT READ:MYSQL
ACCESSING_TABLES [users]
VERIFY_WHERE
id: 42
RETURNS {{user.yaml}}
# Verify written INSERT values
EXPECT WRITE:MYSQL
ACCESSING_TABLES [users]
VERIFY_OPERATION INSERT
VERIFY_WRITTEN_VALUES
email: john@example.com
name: John Doe
WITH {{user_write.yaml}}
# CALL N — three SELECTs on users with no distinguishing WHERE
EXPECT READ:MYSQL CALL 1
ACCESSING_TABLES [users]
VERIFY_OPERATION SELECT
RETURNS {{user.yaml}}
EXPECT READ:MYSQL CALL 2
ACCESSING_TABLES [users]
VERIFY_OPERATION SELECT
RETURNS {{user.yaml}}
EXPECT READ:MYSQL CALL 3
ACCESSING_TABLES [users]
VERIFY_OPERATION SELECT
RETURNS EMPTY
# JOIN: route by both tables
EXPECT READ:POSTGRESQL
ACCESSING_TABLES [orders, users]
VERIFY_OPERATION SELECT
RETURNS {{orders_with_user.yaml}}
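The specificity-wins resolution described above can be illustrated with a toy resolver. This is a sketch for intuition only; the dict shapes and field names are hypothetical, not LineSpec internals:

```python
def pick_mock(mocks, query_tables):
    """Choose a mock for a query: exact table-set match first, then the
    most declared VERIFY_ clauses wins, then lowest unconsumed CALL N."""
    candidates = [m for m in mocks
                  if not m["consumed"] and set(m["tables"]) == set(query_tables)]
    if not candidates:
        return None
    # specificity = number of declared VERIFY_ clauses on the mock
    best = max(c["verify_count"] for c in candidates)
    tied = [c for c in candidates if c["verify_count"] == best]
    winner = min(tied, key=lambda c: c.get("call", 0))  # CALL N: ascending
    winner["consumed"] = True                           # consumed once matched
    return winner
```

With two equally specific mocks on the same table, CALL 1 is consumed before CALL 2, matching the "three SELECTs" example above.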
Legacy Text Matching (Deprecated)
USING_SQL and USING_SQL_CONTAINS are still functional but deprecated in favour of semantic matching.
| Keyword | Match mode |
|---|---|
| USING_SQL | Exact match after normalization (backticks stripped, whitespace collapsed, table.* → *) |
| USING_SQL_CONTAINS | Substring match after normalization |
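A rough sketch of the normalization described in the table (illustrative only; the proxy's exact rules may differ):

```python
import re

def normalize_sql(sql: str) -> str:
    """Normalize SQL for legacy text matching: strip backticks,
    rewrite table.* to *, and collapse whitespace."""
    sql = sql.replace("`", "")              # backticks stripped
    sql = re.sub(r"\b\w+\.\*", "*", sql)    # table.* -> *
    sql = re.sub(r"\s+", " ", sql).strip()  # whitespace collapsed
    return sql
```

USING_SQL would then compare normalized strings for equality, while USING_SQL_CONTAINS would check for a normalized substring.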
EXPECT HTTP
EXPECT HTTP:<METHOD> <url>
[HEADERS
  <Header-Name>: <value>
  ...]
RETURNS {{<response_file>}}
Simulating dependency failures:
# Simulate a network/connection failure (TCP close, no response)
EXPECT HTTP:GET http://user-service.local/users/42
RETURNS ERROR
# Return a non-200 HTTP status from the dependency
EXPECT HTTP:POST http://payment-service.local/charge
RETURNS HTTP:429
# Non-200 status with a response body
EXPECT HTTP:GET http://auth-service.local/validate
RETURNS HTTP:401
WITH {{payloads/auth_error.json}}
Rules:
- RETURNS is required for HTTP expectations; use {{file}}, ERROR, ERROR <code>, or HTTP:NNN
- RETURNS ERROR closes the TCP connection immediately — the service sees an io.EOF
- RETURNS HTTP:NNN sends the given status code; combine with WITH {{file}} for a response body
- HEADERS is optional; headers are matched against the actual request
- The proxy intercepts calls to the hostname and returns the mocked response
- Tests fail if the HTTP mock is defined but not invoked
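These failure modes exercise the service's own error handling. A sketch of the kind of client code the mocks above would drive, assuming a generic HTTP client; `fetch` and the outcome labels are hypothetical, injected here for illustration:

```python
def call_dependency(fetch):
    """Map a dependency call to an outcome. `fetch` performs the HTTP
    request: it may raise ConnectionError (what RETURNS ERROR produces
    as an abrupt TCP close) or return (status, body)."""
    try:
        status, body = fetch()
    except ConnectionError:            # RETURNS ERROR: connection dropped
        return ("unavailable", None)
    if status == 429:                  # RETURNS HTTP:429
        return ("rate_limited", None)
    if status == 401:                  # RETURNS HTTP:401 WITH a body
        return ("unauthorized", body)
    return ("ok", body)
```

A spec asserting RESPOND HTTP:503 after RETURNS ERROR is effectively checking that the service takes the "unavailable" branch rather than crashing.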
EXPECT READ:MYSQL
EXPECT READ:MYSQL [<table_name>] [CALL N]
[ACCESSING_TABLES [<table1>, <table2>, ...]]
[VERIFY_OPERATION SELECT]
[VERIFY_WHERE_COLUMNS [<col1>, <col2>, ...]]
[VERIFY_WHERE
<col>: <value>
...]
RETURNS {{<response_file>}}
EXPECT READ:MYSQL <table_name>
[USING_SQL """
<SQL SELECT statement>
"""]
[USING_SQL_CONTAINS """
<sql-fragment>
"""]
RETURNS {{<response_file>}}
Rules:
- RETURNS is required (either a file or EMPTY)
- When ACCESSING_TABLES is used the table name on the EXPECT line may be omitted
- ACCESSING_TABLES requires an exact match on the full set of referenced tables
- RETURNS EMPTY generates a proper MySQL zero-row response
EXPECT WRITE:MYSQL
EXPECT WRITE:MYSQL [<table_name>] [CALL N]
[ACCESSING_TABLES [<table1>, <table2>, ...]]
[VERIFY_OPERATION INSERT|UPDATE|DELETE]
[VERIFY_WRITTEN_VALUES
<col>: <value>
...]
[WITH {{<input_payload>}}]
[RETURNS {{<write_result_file>}}]
[NO TRANSACTION]
[VERIFY query CONTAINS '<string>']
[VERIFY query NOT_CONTAINS '<string>']
[VERIFY query MATCHES /<regex>/]
EXPECT WRITE:MYSQL <table_name>
[USING_SQL """
<SQL INSERT/UPDATE/DELETE statement>
"""]
[USING_SQL_CONTAINS """
<sql-fragment>
"""]
[WITH {{<input_payload>}}]
[RETURNS {{<write_result_file>}}]
[NO TRANSACTION]
[VERIFY query CONTAINS '<string>']
[VERIFY query NOT_CONTAINS '<string>']
[VERIFY query MATCHES /<regex>/]
The optional RETURNS payload controls what the MySQL driver sees in the OK packet response. This is essential when one write's result (e.g. last_insert_id from an INSERT) is used in a subsequent query. An example write-result payload file:
affected_rows: 1
last_insert_id: 42
Multiple WRITE:MYSQL expectations on the same table can be disambiguated with VERIFY_OPERATION or CALL N. For legacy mocks, they are consumed in declaration order:
EXPECT WRITE:MYSQL orders
WITH {{order_insert.yaml}}
RETURNS {{order_insert_result.yaml}}
EXPECT WRITE:MYSQL orders
WITH {{order_status_update.yaml}}
RETURNS {{order_update_result.yaml}}
Rules:
- WITH is optional
- RETURNS is optional; omitting it defaults to affected_rows=0, last_insert_id=0
- NO TRANSACTION is parsed but has no effect (transactions pass through)
- VERIFY clauses validate the actual SQL executed at runtime
EXPECT WRITE:POSTGRESQL
EXPECT WRITE:POSTGRESQL [<table_name>] [CALL N]
[ACCESSING_TABLES [<table1>, <table2>, ...]]
[VERIFY_OPERATION INSERT|UPDATE|DELETE]
[VERIFY_WRITTEN_VALUES
<col>: <value>
...]
[WITH {{<input_payload>}}]
[RETURNS {{<write_result_file>}}]
[VERIFY query CONTAINS '<string>']
[VERIFY query NOT_CONTAINS '<string>']
[VERIFY query MATCHES /<regex>/]
EXPECT WRITE:POSTGRESQL <table_name>
[USING_SQL """
<SQL INSERT/UPDATE/DELETE statement>
"""]
[USING_SQL_CONTAINS """
<sql-fragment>
"""]
[WITH {{<input_payload>}}]
[RETURNS {{<write_result_file>}}]
[VERIFY query CONTAINS '<string>']
[VERIFY query NOT_CONTAINS '<string>']
[VERIFY query MATCHES /<regex>/]
The optional RETURNS payload controls the affected_rows count in the PostgreSQL CommandComplete tag (e.g. "UPDATE 3"). Omitting it defaults to affected_rows=1. Example payload:
affected_rows: 3
Semantic matching rules:
- ACCESSING_TABLES routes the mock by the exact set of tables the query touches; when used, the table name on the EXPECT line may be omitted
- VERIFY_OPERATION asserts the DML type (INSERT, UPDATE, or DELETE)
- VERIFY_WRITTEN_VALUES asserts column values in the INSERT column list or UPDATE SET clause; the proxy reads actual Bind message parameter values, so it works across any ORM
- CALL N is a tiebreaker when multiple mocks match; mocks are consumed in ascending N order
- Specificity-wins: the mock with the most declared VERIFY_ clauses wins; CALL N breaks ties
Note: SQL RETURNING clauses (PostgreSQL's row-returning syntax) are handled separately — the proxy returns a full result set for those queries, and the RETURNS payload is not used.
Redis Expectations
LineSpec intercepts Redis traffic at the RESP2 protocol level. Use READ:REDIS for commands that fetch data and WRITE:REDIS for commands that mutate state.
EXPECT READ:REDIS
EXPECT READ:REDIS <COMMAND> <key>
RETURNS {{<response_file>}}
# Or for a cache miss:
EXPECT READ:REDIS <COMMAND> <key>
RETURNS EMPTY
Supported read commands: GET, MGET, HGET, HGETALL, HMGET, LRANGE, LLEN, SMEMBERS, SISMEMBER, ZRANGE, ZRANGEBYSCORE, EXISTS, TTL, TYPE, KEYS, STRLEN, LINDEX
Rules:
- RETURNS is required (either a file or EMPTY)
- RETURNS EMPTY encodes as a Redis nil bulk string ($-1\r\n), the correct response for a missing key
- Protocol commands (PING, AUTH, SELECT, HELLO, COMMAND) are handled transparently without registry lookups
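The wire encodings involved can be sketched in a few lines of RESP2 framing (illustrative; not the proxy's code):

```python
def resp2_bulk(value):
    """Encode a RESP2 bulk-string reply. None encodes the nil bulk
    string ($-1\\r\\n) that RETURNS EMPTY produces for a cache miss."""
    if value is None:
        return b"$-1\r\n"
    data = value.encode()
    return b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n"

def resp2_ok():
    """Simple-string +OK reply, as returned for unmatched writes."""
    return b"+OK\r\n"
```

A Redis client receiving `$-1\r\n` for GET reports a nil value, which is exactly how the service observes a cache miss.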
EXPECT WRITE:REDIS
EXPECT WRITE:REDIS <COMMAND> <key>
[WITH {{<input_payload>}}]
[VERIFY command CONTAINS '<string>']
[VERIFY key CONTAINS '<string>']
[VERIFY value CONTAINS '<string>']
Rules:
- WITH is optional; write commands without a payload return +OK
- VERIFY clauses can validate the command, key, and/or value independently
- Unmatched write commands pass through and return +OK
Example — Cache Hit / Miss
TEST list_notifications_cache_hit
RECEIVE HTTP:GET /api/v1/notifications
HEADERS
Authorization: Bearer ${AUTH_TOKEN}
# Redis returns the cached user — skip the downstream HTTP call
EXPECT READ:REDIS GET auth:cache:${AUTH_TOKEN}
RETURNS {{payloads/cached_user.json}}
EXPECT READ:POSTGRESQL notifications
USING_SQL """
SELECT id, content, recipient, created_at FROM notifications
WHERE recipient = $1::VARCHAR ORDER BY created_at DESC
"""
RETURNS {{payloads/notifications_list.yaml}}
EXPECT_NOT HTTP:GET ${USER_SERVICE_URL}
RESPOND HTTP:200
WITH {{payloads/notifications_list_response.yaml}}
Example — Write with VERIFY
TEST delete-user-clears-cache
RECEIVE HTTP:DELETE /api/v1/users/123
EXPECT WRITE:REDIS DEL user:123
VERIFY command CONTAINS 'DEL'
VERIFY key MATCHES /^user:\d+$/
RESPOND HTTP:204
Enable Redis interception in .linespec.yml:
infrastructure:
redis: true
service:
environment:
REDIS_URL: redis://redis-proxy:6379
MongoDB Expectations
LineSpec intercepts MongoDB traffic at the wire protocol level (OP_MSG). No changes are required to the service under test — point its MONGODB_URI at the proxy host.
EXPECT READ:MONGODB
EXPECT READ:MONGODB <collection>
RETURNS {{<response_file>}}
# Or for empty results:
EXPECT READ:MONGODB <collection>
RETURNS EMPTY
Rules:
- RETURNS is required (either a file or EMPTY)
- Payload files may contain a single JSON object or a {"rows": [...]} array for multiple documents
- JSON "id" fields containing a 24-character hex string are automatically mapped to _id: ObjectID
- Unmatched queries are forwarded transparently to the real upstream MongoDB container
EXPECT WRITE:MONGODB
EXPECT WRITE:MONGODB <collection>
[WITH {{<input_payload>}}]
Rules:
- WITH is optional; all matched write operations return {n: 1, ok: 1}
- The interceptor matches by collection name and command type (insert, update, delete, etc.)
- Unmatched write commands are forwarded to the real upstream MongoDB
Example
TEST get_product_success
RECEIVE HTTP:GET /products/507f1f77bcf86cd799439011
EXPECT READ:MONGODB products
RETURNS {{payloads/product_single.json}}
RESPOND HTTP:200
WITH {{payloads/product_single.json}}
TEST create_product_success
RECEIVE HTTP:POST /products
WITH {{payloads/create_product_request.json}}
EXPECT WRITE:MONGODB products
WITH {{payloads/create_product_request.json}}
RESPOND HTTP:201
WITH {{payloads/create_product_response.json}}
NOISE
body.id
body.created_at
Configure MongoDB in .linespec.yml:
database:
type: mongodb
image: mongo:7
port: 27017
container: db
init_script: init.js
database: catalog_service
username: root
password: example
infrastructure:
database: true
service:
environment:
MONGODB_URI: mongodb://root:example@db:27017/catalog_service?authSource=admin
VERIFY Clause
The VERIFY clause validates the actual query or command intercepted at runtime. It can be attached to MySQL, PostgreSQL, and Redis EXPECT statements.
Use cases include:
- Security: Ensuring passwords are hashed before storage
- Compliance: Verifying sensitive data is not logged in plain text
- Correctness: Confirming proper SQL structure or Redis key naming conventions
- Injection prevention: Validating query patterns match expected templates
Targets by channel
| Channel | Valid VERIFY targets |
|---|---|
| MySQL / PostgreSQL | query |
| Redis | command, key, value |
Operators
- CONTAINS — value must include the specified string (substring match)
- NOT_CONTAINS — value must NOT include the specified string
- MATCHES — value must match the specified regex pattern (full Go regexp support)
Best Practices
Use MATCHES with word boundaries (\b) for precise column name matching:
# GOOD: Uses word boundaries to match exact column name
VERIFY query MATCHES /\bpassword_digest\b/
# BAD: Would also match 'password_digest' in 'old_password_digest_column'
VERIFY query CONTAINS 'password_digest'
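The difference is easy to demonstrate. Python's \b behaves the same as Go's for these patterns, so the sketch below mirrors what the VERIFY operators would see:

```python
import re

# A query touching a column that merely *contains* "password_digest"
sql = "UPDATE users SET old_password_digest_column = ?, name = ?"

# CONTAINS semantics: plain substring search gives a false positive
assert "password_digest" in sql

# MATCHES semantics with word boundaries: correctly rejects, because
# underscores are word characters, so no \b exists inside the column name
assert re.search(r"\bpassword_digest\b", sql) is None
```

The same pattern does match a query that uses the exact column, e.g. `SET password_digest = ?`, which is why word boundaries are the recommended form.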
Example — Password Hashing (SQL)
TEST create-user-with-hashing
RECEIVE HTTP:POST /api/v1/users
WITH {{user_create_request.yaml}}
# Ensure password is hashed before storage
EXPECT WRITE:MYSQL users
WITH {{user_with_hashed_password.yaml}}
VERIFY query MATCHES /\bpassword_digest\b/
VERIFY query NOT_CONTAINS '`password`'
RESPOND HTTP:201
Example — Redis Key Convention
TEST delete-user-clears-cache
RECEIVE HTTP:DELETE /api/v1/users/123
EXPECT WRITE:REDIS DEL user:123
VERIFY command CONTAINS 'DEL'
VERIFY key MATCHES /^user:\d+$/
RESPOND HTTP:204
EXPECT_NOT Statement
Defines an external dependency interaction that must NOT occur during execution. Useful for testing query optimization and ensuring certain operations are avoided.
Syntax
EXPECT_NOT <READ|WRITE>:<DB> <table_name>
[USING_SQL """
<sql>
"""]
Example — Testing Efficient Queries
TEST efficient-user-lookup
RECEIVE HTTP:GET /api/v1/users/123
# Assert that we DON'T do a full table scan
EXPECT_NOT READ:MYSQL users
USING_SQL """
SELECT * FROM users
"""
# Should use indexed lookup instead
EXPECT READ:MYSQL users
USING_SQL """
SELECT * FROM users WHERE id = 123 LIMIT 1
"""
RETURNS {{user_response.yaml}}
RESPOND HTTP:200
WITH {{user_response.yaml}}
Rules
- Exactly one of READ:MYSQL or WRITE:MYSQL
- USING_SQL is optional; if provided, matches that specific query
- If no USING_SQL, matches any read/write on the table
- Test fails if the forbidden operation is detected
RESPOND Statement
Defines the final response of the System Under Test.
Syntax
RESPOND HTTP:<status>
[WITH {{<response_file>}}]
[NOISE
  body.<field>
  body.<field>]
Example
RESPOND HTTP:201
WITH {{saved_todo.yaml}}
NOISE
body.id
body.created_at
body.updated_at
Rules
- Exactly one RESPOND per file
- MUST be the final statement
- Status MUST be numeric (e.g., 200, 201, 400, 500)
- WITH is optional for responses without a body
- NOISE must appear after WITH if both are present
NOISE (optional)
Field paths to exclude from comparison:
- NOISE must appear after RESPOND (and after WITH if present)
- Each indented line names one field path to exclude from comparison
- Field paths use dot notation matching the JSON response body (e.g. body.created_at)
- NOISE is optional; omit it when no fields need filtering
Environment Variable Interpolation
LineSpec supports environment variable substitution using ${VAR_NAME} syntax. This feature catches hardcoded secrets and ensures your application reads configuration from the environment.
Syntax
${VAR_NAME}
Variable Name Rules:
- Must start with an uppercase letter (A-Z)
- Can contain uppercase letters, digits, and underscores (A-Z, 0-9, _)
- Lowercase variables are treated as literal text (not interpolated)
Valid: ${API_TOKEN}, ${DB_HOST_1}, ${API_VERSION}
Invalid (treated as literal): ${api_token} (lowercase), ${123_VAR} (starts with digit), ${VAR-NAME} (hyphen not allowed)
Where It Works
Environment variables can be used in:
| Location | Example |
|---|---|
| HTTP URLs | http://api.${DOMAIN}.com/users |
| HTTP Paths | /api/${API_VERSION}/todos |
| HTTP Headers | Authorization: Bearer ${AUTH_TOKEN} |
| SQL Queries | WHERE api_key = '${API_KEY}' |
| Payload Files | JSON/YAML files loaded via WITH {{file.yaml}} |
How It Works
- Check Environment: If the variable is set in the environment, use that value
- Generate Random: If not set, generate a random value at test runtime
- Inject into Container: Generated values are automatically injected as environment variables into your test container
Random Value Format
When a variable is not defined in the environment, LineSpec generates:
{lowercase_var_name}_{16_hex_chars}
Example: api_token_a1b2c3d4e5f6a7b8
This ensures your tests never accidentally match hardcoded secrets: the application must read from environment variables to get the correct value.
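The resolution logic can be sketched as follows (illustrative names; the real runner injects generated values into the container rather than keeping a local dict):

```python
import os
import re
import secrets

# Only uppercase-start names interpolate; lowercase stays literal text
VAR = re.compile(r"\$\{([A-Z][A-Z0-9_]*)\}")

def interpolate(text: str, generated: dict) -> str:
    """Replace ${VAR_NAME} with the env value, or a generated value of
    the form {lowercase_name}_{16 hex chars}. First use defines it."""
    def resolve(match):
        name = match.group(1)
        if name in os.environ:                 # environment wins
            return os.environ[name]
        if name not in generated:              # first resolution defines
            generated[name] = name.lower() + "_" + secrets.token_hex(8)
        return generated[name]
    return VAR.sub(resolve, text)
```

Because the generated suffix is random, a hardcoded token in the application can never equal the value the mocks were registered with, which is what makes the test fail.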
Example — Catching Hardcoded Secrets
TEST authenticate-user
RECEIVE HTTP:POST /api/v1/auth
WITH {{auth_request.yaml}}
HEADERS
Authorization: Bearer ${API_TOKEN}
EXPECT HTTP:GET http://auth-service.local/validate
HEADERS
Authorization: Bearer ${API_TOKEN}
RETURNS {{auth_response.yaml}}
RESPOND HTTP:200
If your application has a hardcoded API token instead of reading from API_TOKEN, the test will fail because the generated random value won't match.
Example — Dynamic Configuration
RECEIVE HTTP:GET /api/${API_VERSION}/users
EXPECT READ:MYSQL users
USING_SQL """SELECT * FROM users WHERE env = '${DEPLOY_ENV}'"""
RETURNS {{users.yaml}}
Payload File Interpolation
Variables in payload files are also interpolated:
api_key: ${API_KEY}
user_id: 123
The actual API key value is substituted at test time when the file is loaded.
Limitations
- No default values: ${VAR:-default} syntax is not supported
- Strict naming: only uppercase letters, digits, and underscores
- No nested interpolation: ${${VAR}} is not supported
- First-use defines: the first resolution of a variable determines its value for the entire test
Configuration (.linespec.yml)
Every LineSpec test directory requires a .linespec.yml file. It tells the runner how to build, start, and wire up your service and its dependencies. Only the service section is required.
Service
Defines the service under test.
service:
name: my-service # Logical name used in container labels
service_dir: my-service # Directory containing the service source code
type: web # web | worker | consumer
framework: fastapi # rails | fastapi | django | express | chi | custom
port: 8000 # Port the service listens on inside the container
health_endpoint: /health # Path polled to confirm the service is ready
docker_compose: docker-compose.yml # Path relative to service_dir
build_context: . # Docker build context
# Override the framework default start command.
# Use ${PORT} to inject the configured port at runtime.
start_command: uvicorn app.main:app --host 0.0.0.0 --port 8000
migration_command: alembic upgrade head # Optional; overrides framework default
needs_warmup: true # true | false (default: per-framework)
warmup_endpoint: /health # Path to poll (overrides framework default)
warmup_delay_ms: 100 # Extra delay after health check passes (ms)
environment: # Env vars injected into the container at test time
DATABASE_URL: postgresql+asyncpg://user:pass@db:5432/mydb
REDIS_URL: redis://redis-proxy:6379
KAFKA_BROKERS: kafka:29092
Framework defaults
| Framework | Start command | Migration command | Warmup endpoint |
|---|---|---|---|
| rails | bundle exec rails server -b 0.0.0.0 -p ${PORT} | bundle exec rails db:migrate | /up |
| fastapi | python -m uvicorn main:app --host 0.0.0.0 --port ${PORT} | — | /health |
| django | python manage.py runserver 0.0.0.0:${PORT} | python manage.py migrate | /health |
| express | npm start | — | /health |
| chi | PORT=${PORT} go run . | — | /health |
| custom | (required) | — | / |
Database
Defines the database container. Omit if infrastructure.external_db: true.
Single-database form (backward compatible):
database:
type: postgresql # mysql | postgresql | mongodb
image: postgres:16-alpine
port: 5432
container: db # Service name in docker-compose
init_script: init.sql # SQL or JS file run on first startup to seed schema
database: mydb
username: myuser
password: mypassword
host: db.internal # External host (used when external_db: true)
proxy: true # Set to false to disable protocol-level interception
Multi-database form — use databases: when a service talks to more than one database type simultaneously. Each entry gets its own real-DB container and proxy sidecar.
databases:
- name: mysql # required; host defaults to this name
type: mysql
image: mysql:8.4
port: 3306
database: myapp_development
username: myuser
password: mypassword
proxy: true
# proxy alias → "mysql", real DB alias → "real-mysql"
- name: mongo
type: mongodb
image: mongo:7
port: 27017
database: myapp_events
username: myuser
password: mypassword
proxy: true
# proxy alias → "mongo", real DB alias → "real-mongo"
Environment variables injected when using databases::
| Type | Prefixed (all databases) | Unprefixed (first database only) |
|---|---|---|
| mysql | <NAME>_DB_HOST, <NAME>_DB_PORT, <NAME>_DB_USERNAME, <NAME>_DB_PASSWORD | DB_HOST, DB_PORT, DB_USERNAME, DB_PASSWORD |
| postgresql | <NAME>_DATABASE_URL | DATABASE_URL |
| mongodb | <NAME>_MONGODB_URI | MONGODB_URI |
Infrastructure
Toggles which infrastructure components LineSpec manages.
infrastructure:
database: true # Start and proxy a database container
kafka: true # Start a Kafka container for EVENT/MESSAGE expectations
redis: true # Start and proxy a Redis interceptor
grpc: false # Start a gRPC proxy sidecar
external_db: false # true = don't manage the DB container
proxy_image: linespec:latest # Docker image for protocol proxy sidecars
Dependencies
External services the SUT calls. Each entry creates a proxy that intercepts matching requests. Supported types: http, grpc.
dependencies:
- name: user-service
type: http
host: user-service.local
port: 3001
proxy: true # Intercept calls to this host
host_alias: user-svc # Optional DNS alias inside the test network
headers: # Default headers forwarded to all matched requests
X-Internal-Token: secret
- name: workflow-service
type: grpc
host: temporal
port: 7233
grpc_descriptor_set: proto/workflow.pb # Optional per-dependency override
gRPC dependencies support an optional grpc_descriptor_set field for protobuf descriptor mocks (see gRPC Expectations). A service-level default can be set via the top-level grpc_descriptor_set field; per-dependency values take precedence.
Provenance
Enables git hooks and semantic search for Provenance Records.
provenance:
dir: provenance/
enforcement: warn # none | warn | strict
commit_tag_required: true # Commits must reference a provenance record ID
auto_affected_scope: true # Auto-populate affected_scope from git diffs
embedding: # Voyage AI — enables linespec provenance search
provider: voyage
index_model: voyage-4-large
query_model: voyage-4-lite
api_key: "${VOYAGE_API_KEY}"
similarity_threshold: 0.50
index_on_complete: true
Advanced options
# Container and network naming — supports Go template variables:
# {{ .ServiceName }}, {{ .SpecName }}, {{ .Type }}
container_naming:
database_container: linespec-shared-db
network_alias: real-db
kafka_container: linespec-shared-kafka
proxy_container: proxy-{{ .Type }}-{{ .SpecName }}
app_container: app-{{ .SpecName }}
migrate_container: linespec-migrate-{{ .ServiceName }}
network_name: linespec-shared-net
project_mount_path: /app/project
registry_mount_path: /app/registry
# Dynamic port allocation
ports:
dynamic_ports: true # Allocate random host ports (default: true)
min_port: 20000
max_port: 30000
fixed_proxy_port: 0 # Pin the verify sidecar to a specific port (0 = dynamic)
# Schema discovery (MySQL / PostgreSQL)
schema_discovery:
mode: auto # auto | static | none
exclude_tables:
- schema_migrations
- ar_internal_metadata
cache_file: .linespec/schema-cache.json
# Payload loading
payload:
directory: payloads # Subdirectory name for payload files (default: payloads)
status_field: status # JSON field path used to extract HTTP status
# Misc
timeout_seconds: 60 # Per-test timeout (default: 30)
strict_passthrough: false # true = fail on any unmatched proxy interaction
Minimal example
service:
name: my-service
framework: fastapi
port: 8000
database:
type: postgresql
image: postgres:16-alpine
port: 5432
container: db
database: mydb
username: myuser
password: mypassword
infrastructure:
database: true
Kafka Events
LineSpec intercepts Kafka traffic at the wire protocol level. It handles both the producer side (messages the service publishes) and the consumer side (messages the service receives).
Enable Kafka in .linespec.yml:
infrastructure:
kafka: true
service:
environment:
KAFKA_BROKERS: kafka:29092
EXPECT EVENT (Produce)
Assert that the service publishes a message to a Kafka topic.
EXPECT EVENT:<topic-name>
[WITH {{<payload_file>}}]
- WITH is optional; omit it to assert only that a message was produced on the topic
- When provided, the payload file is matched against the Kafka message value
- Test fails if the expected produce event is not observed
RECEIVE EVENT (Consume)
Trigger the test by injecting a Kafka message into the service's consumer path instead of an HTTP request. Use this to test Kafka consumer handlers.
RECEIVE EVENT:<topic-name>
WITH {{<message_payload>}}
- Exactly one RECEIVE per file — use either RECEIVE HTTP:* or RECEIVE EVENT:*, not both
- WITH is required; the payload file is the message value injected into the topic
Example — Publish on create
TEST create_todo_publishes_event
RECEIVE HTTP:POST /api/v1/todos
WITH {{payloads/todo_request.yaml}}
EXPECT WRITE:MYSQL todos
WITH {{payloads/todo_insert.yaml}}
EXPECT EVENT:todo-events
WITH {{payloads/todo_created_event.yaml}}
RESPOND HTTP:201
WITH {{payloads/todo_response.yaml}}
NOISE
body.id
body.created_at
Example — Consumer handler
TEST process_todo_created_event
RECEIVE EVENT:todo-events
WITH {{payloads/todo_created_event.yaml}}
EXPECT HTTP:GET ${USER_SERVICE_URL}/api/v1/users/42
RETURNS {{payloads/user_info.yaml}}
EXPECT WRITE:POSTGRESQL notifications
WITH {{payloads/notification_insert.yaml}}
RESPOND HTTP:200
gRPC Expectations
LineSpec intercepts gRPC traffic using an HTTP/2 proxy. The service under test must point its gRPC client at the proxy host. No code changes to the service are required.
Enable gRPC in .linespec.yml:
infrastructure:
grpc: true
dependencies:
- name: user-grpc-service
type: grpc
host: user-grpc-service.local
port: 50051
EXPECT GRPC
EXPECT GRPC:<ServiceName>/<MethodName>
[WITH {{<request_payload>}}]
RETURNS {{<response_payload>}}
# or for a method that returns no body:
EXPECT GRPC:<ServiceName>/<MethodName>
[WITH {{<request_payload>}}]
RETURNS EMPTY
- ServiceName/MethodName matches the gRPC route (e.g. UserService/GetUser)
- WITH is optional; omit it to match any request body for that method
- RETURNS is required; the proxy returns it as the gRPC response
- RETURNS EMPTY sends a 5-byte Length-Prefixed Message frame with a zero-length body, the correct wire encoding for a gRPC method that returns an empty protobuf message (e.g. google.protobuf.Empty)
- Test fails if the expected gRPC call is not observed
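The empty-response framing follows the standard gRPC length-prefixed message layout, which can be sketched in a few lines:

```python
import struct

def grpc_frame(payload: bytes) -> bytes:
    """gRPC Length-Prefixed Message: a 1-byte compressed flag (0 for
    uncompressed) plus a 4-byte big-endian payload length, then the
    payload bytes themselves."""
    return struct.pack(">BI", 0, len(payload)) + payload

# RETURNS EMPTY: a 5-byte frame with a zero-length body
assert grpc_frame(b"") == b"\x00\x00\x00\x00\x00"
```

Any non-empty RETURNS payload is framed the same way, with the length prefix set to the payload size.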
Content-Type handling
The gRPC proxy echoes the request's Content-Type in its response:
- `application/grpc+json` (default) — payloads are JSON. The 5-byte gRPC length-prefixed frame contains a JSON body. This is the original mode and remains the default when no Content-Type is specified.
- `application/grpc` — payloads are binary protobuf. When a protobuf descriptor set is configured (see below), `RETURNS` payloads written as JSON are automatically converted to binary protobuf on the wire. Without a descriptor, the raw bytes from the payload file are sent as-is.
Upstream passthrough
When a type: grpc dependency specifies a host and port, the proxy forwards any unmocked gRPC calls to that upstream backend via an HTTP/2 reverse proxy. This lets you mix mocked and real gRPC backends in a single test — methods you EXPECT are intercepted; all others are forwarded transparently.
When no upstream is configured (or infrastructure.grpc: true is used without gRPC dependencies), unmocked calls return UNIMPLEMENTED — preserving backward compatibility with the original pure-mock behavior.
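The per-call dispatch described above can be sketched as follows (illustrative Go; `routeGRPC` and its signature are hypothetical, not LineSpec's actual internals):

```go
package main

import "fmt"

// routeGRPC sketches the proxy's decision for an incoming gRPC method
// path such as "UserService/GetUser". Names are illustrative only.
func routeGRPC(method string, mocked map[string]bool, upstream string) string {
	switch {
	case mocked[method]:
		return "serve mock" // method appears in an EXPECT GRPC block
	case upstream != "":
		return "forward to " + upstream // unmocked, but an upstream host:port exists
	default:
		return "UNIMPLEMENTED" // pure-mock mode: no upstream configured
	}
}

func main() {
	mocks := map[string]bool{"UserService/GetUser": true}
	fmt.Println(routeGRPC("UserService/GetUser", mocks, "up:50051"))  // serve mock
	fmt.Println(routeGRPC("UserService/ListUsers", mocks, "up:50051")) // forward to up:50051
	fmt.Println(routeGRPC("UserService/ListUsers", mocks, ""))         // UNIMPLEMENTED
}
```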
Protobuf descriptor mocks
When the service under test uses native gRPC clients (not JSON), the proxy needs a compiled protobuf descriptor set (.pb file) to convert JSON RETURNS payloads into binary protobuf on the wire.
Configure the descriptor set in .linespec.yml:
# Service-level default — applies to all gRPC dependencies
grpc_descriptor_set: proto/workflow.pb
dependencies:
- name: workflow-service
type: grpc
host: temporal
port: 7233
# Per-dependency override — takes precedence over the service-level default
- name: user-grpc-service
type: grpc
host: user-grpc-service.local
port: 50051
grpc_descriptor_set: proto/user.pb
The descriptor set is a FileDescriptorSet compiled with protoc:
protoc --include_imports --descriptor_set_out=workflow.pb workflow.proto
- When a descriptor is loaded and the request `Content-Type` is `application/grpc`, the proxy converts JSON `RETURNS` payloads to binary protobuf using the descriptor's message definitions
- When no descriptor is configured, or when the request `Content-Type` is `application/grpc+json`, payloads are served as-is (JSON or raw bytes)
- The runner merges all descriptor sets (service-level + per-dependency) into a single `FileDescriptorSet` before passing it to the proxy container
Example — JSON gRPC (application/grpc+json)
TEST get_notification_resolves_user_via_grpc
RECEIVE HTTP:GET /api/v1/notifications/42
HEADERS
Authorization: Bearer ${AUTH_TOKEN}
# Cache miss — must call downstream gRPC service for user info
EXPECT READ:REDIS GET auth:cache:${AUTH_TOKEN}
RETURNS EMPTY
EXPECT GRPC:UserService/GetUser
WITH {{payloads/get_user_request.json}}
RETURNS {{payloads/user_info.json}}
EXPECT READ:POSTGRESQL notifications
USING_SQL """
SELECT * FROM notifications WHERE id = $1
"""
RETURNS {{payloads/notification.yaml}}
RESPOND HTTP:200
WITH {{payloads/notification_response.yaml}}
Example — Binary protobuf gRPC (application/grpc)
TEST start_workflow_success
RECEIVE HTTP:POST /api/v1/workflows
WITH {{payloads/start_workflow_request.json}}
EXPECT GRPC:temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution
WITH {{payloads/grpc_start_request.json}}
RETURNS {{payloads/grpc_start_response.json}}
RESPOND HTTP:200
WITH {{payloads/start_workflow_response.json}}
TIMEOUT Directive
By default, tests time out after 180 seconds (or after the value of timeout_seconds in .linespec.yml, if set). A per-test TIMEOUT directive overrides both for that specific file.
Syntax
TIMEOUT <duration>
Duration uses Go duration syntax: 30s, 2m, 90s, etc. The directive must appear after RECEIVE and before any EXPECT blocks.
Precedence
- Per-test `TIMEOUT` directive (highest)
- `timeout_seconds` in `.linespec.yml`
- Global default of 180 seconds (lowest)
Example
TEST slow_report_generation
RECEIVE HTTP:POST /api/v1/reports
WITH {{payloads/report_request.yaml}}
TIMEOUT 5m
EXPECT WRITE:POSTGRESQL reports
WITH {{payloads/report_insert.yaml}}
RESPOND HTTP:202
WITH {{payloads/report_accepted.yaml}}
Complete Examples
Example 1: Create Todo Success
TEST create_todo_success
RECEIVE HTTP:POST /api/v1/todos
WITH {{todo.yaml}}
HEADERS
Authorization: Bearer token_abc123xyz
EXPECT HTTP:GET http://user-service.local/api/v1/users/auth
HEADERS
Authorization: Bearer token_abc123xyz
RETURNS {{user_info.yaml}}
EXPECT WRITE:MYSQL todos
WITH {{todo_insert.yaml}}
EXPECT EVENT:todo-events
WITH {{todo_created_event.yaml}}
RESPOND HTTP:201
WITH {{saved_todo.yaml}}
NOISE
body.id
body.created_at
body.updated_at
Example 2: Create User with Validation
TEST create-user-secure
RECEIVE HTTP:POST http://localhost:3000/users
WITH {{payloads/user_create_req.yaml}}
HEADERS
Authorization: Bearer token
EXPECT WRITE:MYSQL users
WITH {{payloads/user_with_password_digest.yaml}}
VERIFY query MATCHES /\bpassword_digest\b/
VERIFY query NOT_CONTAINS '`password`'
RESPOND HTTP:201
WITH {{payloads/user_create_resp.yaml}}
NOISE
body.id
body.created_at
Example 3: Notifications with Redis cache + PostgreSQL + downstream HTTP
Demonstrates Redis cache-hit path, database query, and a negative assertion ensuring the downstream user-service is skipped when the cache is warm.
TEST list_notifications_cache_hit
RECEIVE HTTP:GET /api/v1/notifications
HEADERS
Authorization: Bearer ${AUTH_TOKEN}
# Cache hit — user resolved from Redis, no downstream call needed
EXPECT READ:REDIS GET auth:cache:${AUTH_TOKEN}
RETURNS {{payloads/cached_user.json}}
EXPECT READ:POSTGRESQL notifications
USING_SQL """
SELECT notifications.id, notifications.content, notifications.recipient,
notifications.created_at, notifications.updated_at
FROM notifications
WHERE notifications.recipient = $1::VARCHAR
ORDER BY notifications.created_at DESC
"""
RETURNS {{payloads/notifications_list.yaml}}
# Assert the downstream user-service is NOT called (cache was sufficient)
EXPECT_NOT HTTP:GET ${USER_SERVICE_URL}
RESPOND HTTP:200
WITH {{payloads/notifications_list_response.yaml}}
Example 4: MongoDB product catalogue
Read from a MongoDB collection and create a document, with NOISE to exclude server-generated fields.
TEST get_product_success
RECEIVE HTTP:GET /products/507f1f77bcf86cd799439011
EXPECT READ:MONGODB products
RETURNS {{payloads/product_single.json}}
RESPOND HTTP:200
WITH {{payloads/product_single.json}}
TEST create_product_success
RECEIVE HTTP:POST /products
WITH {{payloads/create_product_request.json}}
EXPECT WRITE:MONGODB products
WITH {{payloads/create_product_request.json}}
RESPOND HTTP:201
WITH {{payloads/create_product_response.json}}
NOISE
body.id
body.created_at
Example 5: Multi-database service (MySQL + MongoDB)
A service that writes to two different databases in a single request — MySQL for the relational record and MongoDB for an event log. Each database gets its own proxy sidecar; both are asserted in a single spec.
TEST create_order_success
RECEIVE HTTP:POST /orders
WITH {{payloads/create_order_request.yaml}}
EXPECT WRITE:MYSQL orders
WITH {{payloads/order_insert.yaml}}
EXPECT WRITE:MONGODB order_events
WITH {{payloads/order_event.yaml}}
RESPOND HTTP:201
WITH {{payloads/created_order.yaml}}
NOISE
body.id
body.created_at
The corresponding .linespec.yml declares both databases, each with its own proxy sidecar:
service:
name: order-events-service
framework: custom
port: 3000
health_endpoint: /health
start_command: ./order-events-service
databases:
- name: mysql
type: mysql
image: mysql:8.4
port: 3306
database: orders_development
username: orders_user
password: orders_password
proxy: true
- name: mongo
type: mongodb
image: mongo:7
port: 27017
database: order_events
username: events_user
password: events_password
proxy: true
infrastructure:
database: true
CLI Usage
Execute a spec:
linespec build # Build linespec:latest Docker image (run once after install)
linespec test create_todo_success.linespec
linespec test /path/to/linespecs/
Philosophy
LineSpec is not a natural language tool. It is a strict behavioral specification language designed to:
- Be readable by humans
- Be trivial to parse
- Execute deterministically
- Support modern microservice testing workflows
No inference. No heuristics. No ambiguity.