What Handler adapters are, supported databases, creating custom adapters, protocol, testing, and performance.
# Handler adapters
Handler adapters let you mirror data from an existing system into the bRRAIn graph without rewriting your application. They plug into the Handler and translate a source system's events into bRRAIn records with appropriate POPE entity tagging.
## Supported adapters (shipped)
| Source | Adapter | Mode |
| --- | --- | --- |
| PostgreSQL | adapter-postgres | LISTEN/NOTIFY, logical replication |
| MongoDB | adapter-mongo | Change streams |
| Elasticsearch | adapter-es | Scroll-based sync |
| Kafka | adapter-kafka | Consumer group |
| RabbitMQ | adapter-rabbitmq | Queue consumer |
| S3 | adapter-s3 | Event notifications |
| Webhooks | adapter-webhook | Inbound HTTP |
Install an adapter alongside the Handler:

```bash
brrain handler install-adapter postgres
```
## Configuration
Each adapter ships with a config schema. Example for PostgreSQL:

```yaml
# handler-adapters.yaml
adapters:
  - name: orders-adapter
    type: postgres
    dsn: "postgres://reader:***@db.internal/orders"
    publication: brrain_pub
    slot_name: brrain_orders_slot
    workspace: ecommerce
    classification: internal
    mapping:
      orders.id: record.id
      orders.created: context.created_at
      orders.total: record.amount
```
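The `mapping` block pairs dotted source columns with target record fields. As a rough illustration of the idea only (this is not the Handler's actual implementation, and `applyMapping` is a hypothetical helper), such a mapping could be applied like this:

```go
package main

import "fmt"

// applyMapping is a hypothetical sketch: for each source column that appears
// in the row, copy its value to the mapped target field. Columns absent from
// the mapping are dropped; how the real Handler treats them is not specified here.
func applyMapping(mapping, row map[string]string) map[string]string {
	out := make(map[string]string)
	for src, dst := range mapping {
		if v, ok := row[src]; ok {
			out[dst] = v
		}
	}
	return out
}

func main() {
	mapping := map[string]string{
		"orders.id":      "record.id",
		"orders.created": "context.created_at",
		"orders.total":   "record.amount",
	}
	row := map[string]string{
		"orders.id":      "42",
		"orders.created": "2024-01-01T00:00:00Z",
		"orders.total":   "19.99",
	}
	fmt.Println(applyMapping(mapping, row)["record.id"]) // prints "42"
}
```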
## Creating a custom adapter
An adapter is a Go package implementing the `adapter.Adapter` interface:

```go
package myadapter

import (
	"context"

	"github.com/Qosil/bRRAIn/handler/adapter"
)

type MyAdapter struct {
	client *MySourceClient
}

func (a *MyAdapter) Name() string { return "my-adapter" }

func (a *MyAdapter) Start(ctx context.Context, out chan<- adapter.Event) error {
	for event := range a.client.Subscribe(ctx) {
		out <- adapter.Event{
			Type:    "order",
			Payload: event.Record,
			Context: map[string]string{
				"source":   "my-system",
				"event_id": event.ID,
			},
		}
	}
	return ctx.Err()
}

func (a *MyAdapter) Stop(ctx context.Context) error { return a.client.Close() }
```
Register it with the Handler build:

```bash
brrain handler build --with-adapter ./path/to/myadapter
```
## Adapter protocol
Adapters communicate with the Handler via a typed event channel:

```go
type Event struct {
	Type      string            // Record type
	Payload   any               // Serializable to JSON
	Context   map[string]string // Provenance metadata
	Tombstone bool              // True for delete events
}
```
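Delete events are signalled with `Tombstone: true`. A minimal sketch of constructing one, with the `Event` struct redeclared locally so it compiles stand-alone (`deleteEvent` is a hypothetical helper, and putting the identifier in `Context` rather than the payload is an assumption of this sketch):

```go
package main

import "fmt"

// Event mirrors the protocol struct shown above, redeclared locally so this
// sketch compiles without the bRRAIn handler/adapter package.
type Event struct {
	Type      string
	Payload   any
	Context   map[string]string
	Tombstone bool
}

// deleteEvent builds a tombstone event for a deleted source record. The
// payload is left nil; provenance metadata identifies the event.
func deleteEvent(recordType, eventID string) Event {
	return Event{
		Type:      recordType,
		Context:   map[string]string{"event_id": eventID},
		Tombstone: true,
	}
}

func main() {
	ev := deleteEvent("order", "evt-123")
	fmt.Println(ev.Tombstone) // prints "true"
}
```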
The Handler applies the standard pipeline (classification → Gate 1 → consolidation → Gate 2 → graph write) just as it would for direct SDK calls.
## Testing adapters
Use the test harness that ships with the Handler:

```bash
brrain handler test-adapter ./myadapter --events fixtures/events.jsonl
```
The harness replays a canned event stream through your adapter and validates that emitted records conform to the expected shape.
## Performance considerations
- Batching — The Handler coalesces adapter events into batches (default 50 events / 500 ms). Tune with `--batch-size` and `--batch-window`.
- Back-pressure — The adapter's event channel blocks when the Handler is saturated; your adapter naturally slows down.
- At-least-once delivery — Adapters SHOULD be idempotent. Deduplication is based on `context.event_id`; the Handler skips events with duplicate IDs within a 24-hour window.
- Ordering — Not guaranteed across partitions. If order matters, partition events by entity ID in your source system.
## Observability
Each adapter emits metrics:

- `adapter.{name}.events.total`
- `adapter.{name}.events.failed`
- `adapter.{name}.lag_seconds`
- `adapter.{name}.batch_size`
Low-volume lag alerts (>30s) appear in the admin dashboard's quarantine zone.
## When NOT to use an adapter
- Real-time synchronous needs — Adapters are eventually consistent by design. If you need synchronous reads of your primary data, keep the primary database and use the SDK for graph-enriched retrieval only.
- Tiny data volume — For workspaces storing <100 records per day, the SDK's direct `Store` calls are simpler than adding an adapter.