DynamoDB Weekly — 2026-03, Week 10

Editor’s Note

This week’s developments center on integration patterns that extend DynamoDB beyond its core key-value capabilities, alongside emerging concerns about cost visibility in tightly coupled serverless architectures. The tension between operational simplicity and economic transparency remains a practical consideration for teams running production workloads at scale.

Top Stories

Zero-ETL Integration Brings Full-Text Search to DynamoDB

AWS has documented a zero-ETL integration pattern that connects DynamoDB directly to OpenSearch Service, enabling full-text search, fuzzy matching, and complex queries without managing data pipelines. While the integration eliminates infrastructure overhead, production teams report challenges with cost attribution and observability when DynamoDB is coupled with services like Bedrock and OpenSearch, making it difficult to track feature-level expenses across the combined stack (read more).
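Once the zero-ETL pipeline has replicated table items into an OpenSearch index, queries run against OpenSearch directly. A minimal sketch of the fuzzy-matching queries mentioned above, assuming an index named "products" and a "title" field (both illustrative, not part of the official integration):

```python
# Sketch: build a fuzzy full-text query body for an OpenSearch index
# populated by the zero-ETL pipeline. Index and field names are assumptions.

def fuzzy_search_body(field: str, text: str, fuzziness: str = "AUTO") -> dict:
    """Build an OpenSearch query body that fuzzy-matches one field."""
    return {
        "query": {
            "match": {
                field: {
                    "query": text,
                    "fuzziness": fuzziness,  # tolerate small typos in the input
                }
            }
        }
    }

# With the opensearch-py client, this body would be executed as:
#   client.search(index="products",
#                 body=fuzzy_search_body("title", "wireles headphnes"))
```

The point of the pattern is that none of this touches DynamoDB's query API: key-value traffic stays on the table while search traffic goes to the synced index.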

Serverless Memory Architecture Decouples LLMs from CRUD Path

The Mnemora project demonstrates an AI agent memory architecture that uses DynamoDB for sub-10ms working memory reads while delegating semantic search to Aurora pgvector, with embeddings generated only at write time. This design removes large language model calls from the read path entirely, resulting in an idle cost of around $1 per month on AWS serverless primitives including Aurora Serverless v2, DynamoDB on-demand, and Lambda. The architecture supports multi-tenant isolation at the database layer, addressing a common requirement for production AI applications (read more).
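The write-time-embedding split can be sketched conceptually. This is not Mnemora's actual code; the in-memory dicts stand in for DynamoDB and pgvector, and embed() is a stub for a real model call:

```python
# Conceptual sketch of the pattern described above: embeddings are computed
# once on the write path, so reads are plain key-value lookups.
# working_memory stands in for a DynamoDB table; vector_index for pgvector.

working_memory: dict = {}
vector_index: dict = {}

def embed(text: str) -> list:
    # Placeholder embedding. A real system would invoke a model here --
    # and only here, on the write path.
    return [float(len(text)), float(sum(map(ord, text)) % 97)]

def remember(key: str, text: str) -> None:
    """Write path: store the item and generate its embedding once."""
    working_memory[key] = {"text": text}
    vector_index[key] = embed(text)

def recall(key: str):
    """Read path: a key-value lookup with no model invocation."""
    return working_memory.get(key)
```

Because recall() never calls a model, read latency is bounded by the key-value store alone, which is what makes the sub-10ms figure plausible.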

Resource-Based Policies Enable Cross-Account Stream Processing

AWS has published guidance on using resource-based policies to allow Lambda functions in one account to consume DynamoDB Streams from another, targeting scenarios where application workloads run in isolated accounts while stream processing occurs in centralized analytics environments. This pattern addresses multi-account governance requirements in event-driven architectures without introducing cross-account IAM role complexity (read more).
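A sketch of what such a policy document might look like, built in Python. The account IDs, role name, and stream ARN below are placeholders, and the exact action list is an assumption based on the permissions Lambda event source mappings need to read a stream:

```python
import json

# Illustrative resource-based policy letting a Lambda execution role in a
# consumer account (222...) read a DynamoDB stream owned by 111... .
# All ARNs are placeholders.

STREAM_ARN = (
    "arn:aws:dynamodb:us-east-1:111111111111:"
    "table/orders/stream/2026-03-01T00:00:00.000"
)
CONSUMER_ROLE = "arn:aws:iam::222222222222:role/analytics-stream-consumer"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": CONSUMER_ROLE},
            "Action": [
                "dynamodb:DescribeStream",
                "dynamodb:GetRecords",
                "dynamodb:GetShardIterator",
            ],
            "Resource": STREAM_ARN,
        }
    ],
}

# Attached with boto3 (shown for illustration; requires AWS credentials):
#   boto3.client("dynamodb").put_resource_policy(
#       ResourceArn=STREAM_ARN, Policy=json.dumps(policy))
```

The policy lives on the stream in the producer account, so the consumer account needs no cross-account role assumption, which is the complexity the guidance aims to avoid.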

Cost Instrumentation Uncovers 17× Feature-Level Variance

Call-stack instrumentation applied to a DynamoDB and Bedrock workload revealed a 17× cost difference between features within the same product, ranging from $0.042 to $0.717 per call. The analysis identified a caching bug that triggered three times as many model invocations as necessary, accounting for $2,800 in monthly waste. The findings highlight the difficulty of attributing costs at the feature level in architectures that compose multiple AWS services (read more).
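The core idea of feature-level attribution can be sketched in a few lines. Real tools walk the call stack to tag spend automatically; in this simplified version each handler is tagged explicitly, and the per-call figures (borrowed from the story above) are hypothetical:

```python
from collections import defaultdict
from functools import wraps

# Sketch: accumulate an estimated per-call cost under a feature tag so that
# outliers between features become visible. Costs here are illustrative.

cost_by_feature = defaultdict(list)

def metered(feature: str):
    """Record the estimated cost of every call attributed to a feature."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            cost, result = fn(*args, **kwargs)
            cost_by_feature[feature].append(cost)
            return result
        return wrapper
    return decorator

@metered("search")
def search(query: str):
    # Returns (estimated dollar cost, result); cheap DynamoDB-backed path.
    return 0.042, f"results for {query}"

@metered("summarize")
def summarize(doc: str):
    # Model-backed path; an order of magnitude more expensive per call.
    return 0.717, f"summary of {doc}"

search("widgets")
summarize("report")
ratio = max(cost_by_feature["summarize"]) / max(cost_by_feature["search"])
# ratio comes out around 17x, mirroring the variance reported above
```

Once costs are bucketed this way, a caching bug like the one described shows up as one feature's call count (and total spend) growing out of proportion to its traffic.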

Releases

Terraform has introduced the aws_dynamodb_global_secondary_index resource, which treats each global secondary index as an independent resource with its own lifecycle. This eliminates state drift when GSI or table capacity is adjusted outside Terraform, improving operational flexibility for teams managing DynamoDB infrastructure as code (read more).
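Usage might look roughly like the sketch below. The table attributes follow the existing aws_dynamodb_table schema, but the attribute names on the new GSI resource are assumptions for illustration only; consult the provider documentation for the actual schema:

```hcl
# Hedged sketch: a GSI managed as its own resource, separate from the table.
# Attribute names on the GSI resource are illustrative assumptions.

resource "aws_dynamodb_table" "orders" {
  name         = "orders"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "pk"

  attribute {
    name = "pk"
    type = "S"
  }

  attribute {
    name = "customer_id"
    type = "S"
  }
}

# Because the index has its own lifecycle, capacity or index changes made
# outside Terraform no longer show up as drift on the table resource.
resource "aws_dynamodb_global_secondary_index" "by_customer" {
  table_name      = aws_dynamodb_table.orders.name
  name            = "by-customer"
  hash_key        = "customer_id"
  projection_type = "ALL"
}
```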

Worth Reading

Serverless Memory Architecture for AI Agents — Design patterns for sub-10ms agent memory using DynamoDB and pgvector.

Spendtrace — Call-stack instrumentation tool for feature-level cost attribution in serverless workloads.

Implementing Search on Amazon DynamoDB Data Using Zero-ETL Integration — Official documentation for DynamoDB to OpenSearch integration patterns.