<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:a10="http://www.w3.org/2005/Atom" version="2.0">
  <channel xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <title>ABP.IO Stories</title>
    <link>https://abp.io/community/articles</link>
    <description>A hub for ABP Framework, .NET, and software development. Access articles, tutorials, news, and contribute to the ABP community.</description>
    <lastBuildDate>Sat, 07 Mar 2026 05:34:45 Z</lastBuildDate>
    <generator>Community - ABP.IO</generator>
    <image>
      <url>https://abp.io/assets/favicon.ico/favicon-32x32.png</url>
      <title>ABP.IO Stories</title>
      <link>https://abp.io/community/articles</link>
    </image>
    <a10:link rel="self" type="application/rss+xml" title="self" href="https://abp.io/community/rss?member=fahrigedik" />
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/building-a-multiagent-ai-system-with-a2a-mcp-and-adk-in-.net-iefdehyx</guid>
      <link>https://abp.io/community/posts/building-a-multiagent-ai-system-with-a2a-mcp-and-adk-in-.net-iefdehyx</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <category>ai</category>
      <category>MCP</category>
      <category>dotnet</category>
      <title>Building a Multi-Agent AI System with A2A, MCP, and ADK in .NET</title>
      <description>Developing a multi-agent AI system is no longer a futuristic dream; it is achievable today with open protocols and existing frameworks. Using MCP for tool access, A2A for inter-agent communication, and ADK for orchestration, we built a working Research Assistant.</description>
      <pubDate>Mon, 09 Feb 2026 13:01:37 Z</pubDate>
      <a10:updated>2026-03-07T03:49:30Z</a10:updated>
      <content:encoded><![CDATA[<h1>Building a Multi-Agent AI System with A2A, MCP, and ADK in .NET</h1>
<blockquote>
<p>How we combined three open AI protocols — Google's A2A &amp; ADK with Anthropic's MCP — to build a production-ready Multi-Agent Research Assistant using .NET 10.</p>
</blockquote>
<hr />
<h2>Introduction</h2>
<p>The AI space is changing fast. We have moved past single LLM calls into the era of <strong>Multi-Agent Systems</strong>, in which specialized AI agents work together as a collaborative team.</p>
<p>But here is the problem: <strong>How do you make agents communicate with each other? How do you equip agents with tools? How do you control them?</strong></p>
<p>Three open protocols have emerged for answering these questions:</p>
<ul>
<li><strong>MCP (Model Context Protocol)</strong> by Anthropic — The &quot;USB-C for AI&quot;</li>
<li><strong>A2A (Agent-to-Agent Protocol)</strong> by Google — The &quot;phone line between agents&quot;</li>
<li><strong>ADK (Agent Development Kit)</strong> by Google — The &quot;organizational chart for agents&quot;</li>
</ul>
<p>In this article, I will briefly describe each protocol, highlight the benefits of the combination, and walk you through our own project: a <strong>Multi-Agent Research Assistant</strong> developed via ABP Framework.</p>
<hr />
<h2>The Problem: Why Single-Agent Isn't Enough</h2>
<p>Imagine you ask an AI: <em>&quot;Research the latest AI agent frameworks and give me a comprehensive analysis report.&quot;</em></p>
<p>A single LLM call would:</p>
<ul>
<li>Hallucinate search results (can't actually browse the web)</li>
<li>Produce a shallow analysis (no structured research pipeline)</li>
<li>Lose context between steps (no state management)</li>
<li>Fail to save results anywhere (no tool access)</li>
</ul>
<p>What you actually need is a <strong>team of specialists</strong>:</p>
<ol>
<li>A <strong>Researcher</strong> who searches the web and gathers raw data</li>
<li>An <strong>Analyst</strong> who processes that data into a structured report</li>
<li><strong>Tools</strong> that let agents interact with the real world (web, database, filesystem)</li>
<li>An <strong>Orchestrator</strong> that coordinates everything</li>
</ol>
<p>This is exactly what we built.</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image.png" alt="single-vs-multiagent system" /></p>
<h2>Protocol #1: MCP — Giving Agents Superpowers</h2>
<h3>What is MCP?</h3>
<p><strong>MCP (Model Context Protocol)</strong> is Anthropic's standardized protocol for connecting AI models to external tools and data sources. Think of it as <strong>the USB-C of AI</strong>: one port that works with everything.</p>
<p>Before MCP, if you wanted your LLM to search the web, query a database, or store files, you had to write custom integration code for each capability. With MCP, you define your tools once, and any MCP-compatible agent can use them.</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-1.png" alt="&quot;mcp&quot;" /></p>
<h3>How MCP Works</h3>
<p>MCP follows a simple <strong>Client-Server architecture</strong>:</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/mcp-client-server-1200x700.png" alt="mcp client server" /></p>
<p>The flow is straightforward:</p>
<ol>
<li><strong>Discovery</strong>: The agent asks &quot;What tools do you have?&quot; (<code>tools/list</code>)</li>
<li><strong>Invocation</strong>: The agent calls a specific tool (<code>tools/call</code>)</li>
<li><strong>Result</strong>: The tool returns data back to the agent</li>
</ol>
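<p>Concretely, MCP messages are JSON-RPC 2.0. Below is a minimal sketch of the three steps as plain message objects (the argument and result payloads are illustrative, not taken from a specific SDK):</p>

```javascript
// Minimal JSON-RPC 2.0 request shapes used by MCP (sketch; argument
// payloads are illustrative, not copied from a specific SDK).

// 1. Discovery: ask the server which tools it exposes.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list"
};

// 2. Invocation: call one tool by name with structured arguments.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args }
  };
}

const callRequest = buildToolCall(2, "web_search", { query: "AI agent frameworks" });

// 3. Result: the server replies with the same id and a result payload.
const exampleResult = {
  jsonrpc: "2.0",
  id: 2,
  result: { content: [{ type: "text", text: "…search results…" }] }
};
```

<p>A real MCP client sends these over stdio or HTTP and matches responses to requests by <code>id</code>.</p>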
<h3>MCP in Our Project</h3>
<p>We built the following MCP tools:</p>
<p>| MCP Tool | Purpose | Used By |
|----------|---------|---------|
| <code>web_search</code> | Searches the web via Tavily API | Researcher Agent |
| <code>fetch_url_content</code> | Fetches content from a URL | Researcher Agent |
| <code>save_research_to_file</code> | Saves reports to the filesystem | Analysis Agent |
| <code>save_research_to_database</code> | Persists results in SQL Server | Analysis Agent |
| <code>search_past_research</code> | Queries historical research | Analysis Agent |</p>
<p>The beauty of MCP is that you do not need to know how these tools are implemented. You simply call them by name, guided by their published descriptions.</p>
<hr />
<h2>Protocol #2: A2A — Making Agents Talk to Each Other</h2>
<h3>What is A2A?</h3>
<p><strong>A2A (Agent to Agent)</strong>, originally proposed by Google and now hosted under the Linux Foundation, is a protocol that lets <strong>one AI agent discover another and exchange tasks</strong>. Where MCP gives agents tools, A2A gives them the ability to talk to each other.</p>
<p>Think of it this way:</p>
<ul>
<li><strong>MCP</strong> = &quot;What can this agent <em>do</em>?&quot; (capabilities)</li>
<li><strong>A2A</strong> = &quot;How do agents <em>talk</em>?&quot; (communication)</li>
</ul>
<h3>The Agent Card: Your Agent's Business Card</h3>
<p>Every A2A-compatible agent publishes an <strong>Agent Card</strong> — a JSON document that describes who it is and what it can do. It's like a business card for AI agents:</p>
<pre><code class="language-json">{
  &quot;name&quot;: &quot;Researcher Agent&quot;,
  &quot;description&quot;: &quot;Searches the web to collect comprehensive research data&quot;,
  &quot;url&quot;: &quot;https://localhost:44331/a2a/researcher&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;capabilities&quot;: {
    &quot;streaming&quot;: false,
    &quot;pushNotifications&quot;: false
  },
  &quot;skills&quot;: [
    {
      &quot;id&quot;: &quot;web-research&quot;,
      &quot;name&quot;: &quot;Web Research&quot;,
      &quot;description&quot;: &quot;Searches the web on a given topic and collects raw data&quot;,
      &quot;tags&quot;: [&quot;research&quot;, &quot;web-search&quot;, &quot;data-collection&quot;]
    }
  ]
}
</code></pre>
<p>Other agents can discover this card at <code>/.well-known/agent.json</code> and immediately know:</p>
<ul>
<li>What this agent does</li>
<li>Where to reach it</li>
<li>What skills it has</li>
</ul>
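<p>Discovery can be as simple as resolving the well-known path against an agent's base address and reading the skills from the returned card. A sketch (no real HTTP call is made here; the card object just mirrors the example above):</p>

```javascript
// Resolve the A2A well-known discovery URL for an agent's base address,
// then read the skill IDs out of a fetched Agent Card (sketch; the card
// object below mirrors the example card shown earlier).
function agentCardUrl(baseUrl) {
  return new URL("/.well-known/agent.json", baseUrl).toString();
}

function listSkillIds(card) {
  return (card.skills || []).map(s => s.id);
}

const url = agentCardUrl("https://localhost:44331");
const card = {
  name: "Researcher Agent",
  url: "https://localhost:44331/a2a/researcher",
  skills: [{ id: "web-research", name: "Web Research" }]
};
```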
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-2.png" alt="What is A2A?" /></p>
<h3>How A2A Task Exchange Works</h3>
<p>Once an agent discovers another agent, it can send tasks:</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-researcher-seq-1200x700.png" alt="orchestrator" /></p>
<p>The key concepts:</p>
<ul>
<li><strong>Task</strong>: A unit of work sent between agents (like an email with instructions)</li>
<li><strong>Artifact</strong>: The output produced by an agent (like an attachment in the reply)</li>
<li><strong>Task State</strong>: <code>Submitted → Working → Completed/Failed</code></li>
</ul>
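<p>The task lifecycle can be modeled as a tiny state machine. A simplified sketch (real A2A implementations carry additional states, such as <code>InputRequired</code>):</p>

```javascript
// Allowed A2A-style task state transitions (simplified sketch).
const TRANSITIONS = {
  Submitted: ["Working"],
  Working: ["Completed", "Failed"],
  Completed: [],
  Failed: []
};

// Return a new task advanced to nextState, or throw on an illegal move.
function advance(task, nextState) {
  const allowed = TRANSITIONS[task.state] || [];
  if (!allowed.includes(nextState)) {
    throw new Error(`Illegal transition ${task.state} -> ${nextState}`);
  }
  return { ...task, state: nextState };
}

let task = { id: "t1", state: "Submitted", artifacts: [] };
task = advance(task, "Working");
task = advance(task, "Completed");
```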
<h3>A2A in Our Project</h3>
<p>Agent communication in our system uses A2A:</p>
<ul>
<li>The <strong>Orchestrator</strong> finds all agents through the Agent Cards</li>
<li>It sends a research task to the <strong>Researcher Agent</strong></li>
<li>The Researcher’s output (artifacts) is used as input by the <strong>Analysis Agent</strong></li>
<li>The Analysis Agent creates the final structured report</li>
</ul>
<hr />
<h2>Protocol #3: ADK — Organizing Your Agent Team</h2>
<h3>What is ADK?</h3>
<p><strong>ADK (Agent Development Kit)</strong>, created by Google, provides patterns for <strong>organizing and orchestrating multiple agents</strong>. It answers the question: &quot;How do you build a team of agents that work together efficiently?&quot;</p>
<p>ADK gives you:</p>
<ul>
<li><strong>BaseAgent</strong>: A foundation every agent inherits from</li>
<li><strong>SequentialAgent</strong>: Runs agents one after another (pipeline)</li>
<li><strong>ParallelAgent</strong>: Runs agents simultaneously</li>
<li><strong>AgentContext</strong>: Shared state that flows through the pipeline</li>
<li><strong>AgentEvent</strong>: Control flow signals (escalate, transfer, state updates)</li>
</ul>
<blockquote>
<p><strong>Note</strong>: ADK's official SDK is Python-only. We ported the core patterns to .NET for our project.</p>
</blockquote>
<h3>The Pipeline Pattern</h3>
<p>The most powerful ADK pattern is the <strong>Sequential Pipeline</strong>. Think of it as an assembly line in a factory:</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-state-flow.png" alt="agent state flow" /></p>
<p>Each agent:</p>
<ol>
<li>Receives the shared <strong>AgentContext</strong> (with state from previous agents)</li>
<li>Does its work</li>
<li>Updates the state</li>
<li>Passes it to the next agent</li>
</ol>
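<p>The four steps above can be sketched as a minimal sequential pipeline over a shared context object (hypothetical stub agents; the real agents call GPT and MCP tools instead):</p>

```javascript
// A minimal ADK-style sequential pipeline: each agent reads the shared
// context, does its work, and writes its output back (stub logic).
function sequentialAgent(subAgents) {
  return function run(context) {
    for (const agent of subAgents) {
      agent(context); // each agent mutates the shared context in turn
    }
    return context;
  };
}

// Stub agents standing in for the Researcher and the Analyst.
function researcherAgent(ctx) {
  ctx.rawData = `raw findings about: ${ctx.query}`;
}
function analysisAgent(ctx) {
  ctx.report = `# Report\n\nBased on: ${ctx.rawData}`;
}

const pipeline = sequentialAgent([researcherAgent, analysisAgent]);
const result = pipeline({ query: "AI agent frameworks" });
```

<p>Because state flows through one shared object, the Analyst never needs to know how the Researcher produced its data, only which key it wrote.</p>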
<h3>AgentContext: The Shared Memory</h3>
<p><code>AgentContext</code> is like a shared whiteboard that all agents can read from and write to:</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-context.png" alt="agent context" /></p>
<p>This pattern eliminates the need for complex inter-agent messaging — agents simply read and write to a shared context.</p>
<h3>ADK Orchestration Patterns</h3>
<p>ADK supports multiple orchestration patterns:</p>
<p>| Pattern | Description | Use Case |
|---------|-------------|----------|
| <strong>Sequential</strong> | A → B → C | Research → Analysis pipeline |
| <strong>Parallel</strong> | A, B, C simultaneously | Multiple searches at once |
| <strong>Fan-Out/Fan-In</strong> | Split → Process → Merge | Distributed research |
| <strong>Conditional Routing</strong> | If/else agent selection | Route by query type |</p>
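<p>As an illustration of the fan-out/fan-in row, one query can be split into subtopics, each researched separately, and the partial results merged. A synchronous sketch (a real implementation would run the branches concurrently):</p>

```javascript
// Fan-out/fan-in sketch: split a query into subtopics, process each
// part independently, then merge the partial results.
function fanOutFanIn(query, subtopics, researchFn, mergeFn) {
  const partials = subtopics.map(topic => researchFn(query, topic)); // fan-out
  return mergeFn(partials);                                          // fan-in
}

const merged = fanOutFanIn(
  "AI agent frameworks",
  ["LangChain", "Semantic Kernel", "AutoGen"],
  (q, t) => `${t}: findings for "${q}"`,
  parts => parts.join("\n")
);
```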
<hr />
<h2>How the Three Protocols Work Together</h2>
<p>Here's the key insight: <strong>MCP, A2A, and ADK are not competitors — they're complementary layers of a complete agent system.</strong></p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/agent-ecosystem.png" alt="agent ecosystem" /></p>
<p>Each protocol handles a different concern:</p>
<p>| Layer | Protocol | Question It Answers |
|-------|----------|-------------------|
| <strong>Top</strong> | ADK | &quot;How are agents organized?&quot; |
| <strong>Middle</strong> | A2A | &quot;How do agents communicate?&quot; |
| <strong>Bottom</strong> | MCP | &quot;What tools can agents use?&quot; |</p>
<hr />
<h2>Our Project: Multi-Agent Research Assistant</h2>
<h3>Built With</h3>
<ul>
<li><strong>.NET 10.0</strong> — Latest runtime</li>
<li><strong>ABP Framework 10.0.2</strong> — Enterprise .NET application framework</li>
<li><strong>Semantic Kernel 1.70.0</strong> — Microsoft's AI orchestration SDK</li>
<li><strong>Azure OpenAI (GPT)</strong> — LLM backbone</li>
<li><strong>Tavily Search API</strong> — Real-time web search</li>
<li><strong>SQL Server</strong> — Research persistence</li>
<li><strong>MCP SDK</strong> (<code>ModelContextProtocol</code> 0.8.0-preview.1)</li>
<li><strong>A2A SDK</strong> (<code>A2A</code> 0.3.3-preview)</li>
</ul>
<h3>How It Works (Step by Step)</h3>
<p><strong>Step 1: User Submits a Query</strong></p>
<p>For example, the user enters a research topic in the dashboard, such as <em>“Compare the latest AI agent frameworks: LangChain, Semantic Kernel, and AutoGen”</em>, and selects an execution mode: ADK-Sequential or A2A.</p>
<p><strong>Step 2: Orchestrator Activates</strong></p>
<p>The <code>ResearchOrchestrator</code> receives the query and constructs the <code>AgentContext</code>. In ADK mode, it constructs a <code>SequentialAgent</code> with two sub-agents; in A2A mode, it uses the <code>A2AServer</code> to send the tasks.</p>
<p><strong>Step 3: Researcher Agent Goes to Work</strong></p>
<p>The Researcher Agent:</p>
<ul>
<li>Receives the query from the context</li>
<li>Uses GPT to formulate optimal search queries</li>
<li>Calls the <code>web_search</code> MCP tool (powered by Tavily API)</li>
<li>Collects and synthesizes raw research data</li>
<li>Stores results in the shared <code>AgentContext</code></li>
</ul>
<p><strong>Step 4: Analysis Agent Takes Over</strong></p>
<p>The Analysis Agent:</p>
<ul>
<li>Reads the Researcher's raw data from <code>AgentContext</code></li>
<li>Uses GPT to perform deep analysis</li>
<li>Generates a structured Markdown report with sections:
<ul>
<li>Executive Summary</li>
<li>Key Findings</li>
<li>Detailed Analysis</li>
<li>Comparative Assessment</li>
<li>Conclusion and Recommendations</li>
</ul>
</li>
<li>Calls MCP tools to save the report to both filesystem and database</li>
</ul>
<p><strong>Step 5: Results Returned</strong></p>
<p>The orchestrator collects all results and returns them to the user via the REST API. The dashboard displays the research report, analysis report, agent event timeline, and raw data.</p>
<h3>Two Execution Modes</h3>
<p>Our system supports two execution modes, demonstrating both ADK and A2A approaches:</p>
<h4>Mode 1: ADK Sequential Pipeline</h4>
<p>Agents are organized as a <code>SequentialAgent</code>. State flows automatically through the pipeline via <code>AgentContext</code>. This is an in-process approach — fast and simple.</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/sequential-agent-context-flow-1200x700.png" alt="sequential agent context flow" /></p>
<h4>Mode 2: A2A Protocol-Based</h4>
<p>Agents communicate via the A2A protocol. The Orchestrator sends <code>AgentTask</code> objects to each agent through the <code>A2AServer</code>. Each agent has its own <code>AgentCard</code> for discovery.</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/orchestrator-a2a-routing-1200x700.png" alt="orchestrator a2a routing" /></p>
<h3>The Dashboard</h3>
<p>The UI provides a complete research experience:</p>
<ul>
<li><strong>Hero Section</strong> with system description and protocol badges</li>
<li><strong>Architecture Cards</strong> showing all four components (Researcher, Analyst, MCP Tools, Orchestrator)</li>
<li><strong>Research Form</strong> with query input and mode selection</li>
<li><strong>Live Pipeline Status</strong> tracking each stage of execution</li>
<li><strong>Tabbed Results</strong> view: Research Report, Analysis Report, Raw Data, Agent Events</li>
<li><strong>Research History</strong> table with past queries and their results</li>
</ul>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-3.png" alt="Dashboard 1" /></p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/09-02-2026-building-multiagent-system-in-dotnet/images/image-4.png" alt="Dashboard 2" /></p>
<hr />
<h2>Why ABP Framework?</h2>
<p>We chose ABP Framework as our .NET application foundation. Here's why it was a natural fit:</p>
<p>| ABP Feature | How We Used It |
|-------------|---------------|
| <strong>Auto API Controllers</strong> | <code>ResearchAppService</code> automatically becomes REST API endpoints |
| <strong>Dependency Injection</strong> | Clean registration of agents, tools, orchestrator, Semantic Kernel |
| <strong>Repository Pattern</strong> | <code>IRepository&lt;ResearchRecord&gt;</code> for database operations in MCP tools |
| <strong>Module System</strong> | All agent ecosystem config encapsulated in <code>AgentEcosystemModule</code> |
| <strong>Entity Framework Core</strong> | Research record persistence with code-first migrations |
| <strong>Built-in Auth</strong> | OpenIddict integration for securing agent endpoints |
| <strong>Health Checks</strong> | Monitoring agent ecosystem health |</p>
<p>ABP's single-layer template gave us a solid .NET foundation: all the enterprise features we needed, without unnecessary complexity for a focused AI project. That said, the agent architecture (MCP, A2A, ADK) is framework-agnostic and can be implemented in any .NET application.</p>
<hr />
<h2>Key Takeaways</h2>
<h3>1. Protocols Are Complementary, Not Competing</h3>
<p>MCP, A2A, and ADK solve different problems. Using them together creates a complete agent system:</p>
<ul>
<li><strong>MCP</strong>: Standardize tool access</li>
<li><strong>A2A</strong>: Standardize inter-agent communication</li>
<li><strong>ADK</strong>: Standardize agent orchestration</li>
</ul>
<h3>2. Start Simple, Scale Later</h3>
<p>Our system runs everything in a single process, using in-process A2A. Designing around A2A from the start means each agent can later be extracted into its own microservice without changing the agent logic.</p>
<h3>3. Shared State &gt; Message Passing (For Simple Cases)</h3>
<p>ADK's <code>AgentContext</code> with shared state is simpler and faster than A2A message passing for in-process scenarios. Use A2A when agents need to run as separate services.</p>
<h3>4. MCP is the Real Game-Changer</h3>
<p>The ability to define tools once and have any agent use them — with automatic discovery and structured invocations — eliminates enormous amounts of boilerplate code.</p>
<h3>5. LLM Abstraction is Critical</h3>
<p>Using Semantic Kernel's <code>IChatCompletionService</code> lets you swap between Azure OpenAI, OpenAI, Ollama, or any provider without touching agent code.</p>
<hr />
<h2>What's Next?</h2>
<p>This project demonstrates the foundation of a multi-agent system. Future enhancements could include:</p>
<ul>
<li><strong>Streaming responses</strong> — Real-time updates as agents work (A2A supports this)</li>
<li><strong>More specialized agents</strong> — Code analysis, translation, fact-checking agents</li>
<li><strong>Distributed deployment</strong> — Each agent as a separate microservice with HTTP-based A2A</li>
<li><strong>Agent marketplace</strong> — Discover and integrate third-party agents via A2A Agent Cards</li>
<li><strong>Human-in-the-loop</strong> — Using A2A's <code>InputRequired</code> state for human approval steps</li>
<li><strong>RAG integration</strong> — MCP tools for vector database search</li>
</ul>
<hr />
<h2>Resources</h2>
<p>| Resource | Link |
|----------|------|
| <strong>MCP Specification</strong> | <a href="https://modelcontextprotocol.io">modelcontextprotocol.io</a> |
| <strong>A2A Specification</strong> | <a href="https://google.github.io/A2A">google.github.io/A2A</a> |
| <strong>ADK Documentation</strong> | <a href="https://google.github.io/adk-docs">google.github.io/adk-docs</a> |
| <strong>ABP Framework</strong> | <a href="https://abp.io">abp.io</a> |
| <strong>Semantic Kernel</strong> | <a href="https://github.com/microsoft/semantic-kernel">github.com/microsoft/semantic-kernel</a> |
| <strong>MCP .NET SDK</strong> | <a href="https://www.nuget.org/packages/ModelContextProtocol">NuGet: ModelContextProtocol</a> |
| <strong>A2A .NET SDK</strong> | <a href="https://www.nuget.org/packages/A2A">NuGet: A2A</a> |
| <strong>Our Source Code</strong> | <a href="https://github.com/fahrigedik/agent-ecosystem-in-abp">GitHub Repository</a> |</p>
<hr />
<h2>Conclusion</h2>
<p>Developing a multi-agent AI system is no longer a futuristic dream; it is achievable today with open protocols and existing frameworks. Using <strong>MCP</strong> for tool access, <strong>A2A</strong> for inter-agent communication, and <strong>ADK</strong> for orchestration, we built a working Research Assistant.</p>
<p>ABP Framework and .NET proved to be an excellent choice, providing the infrastructure we needed (DI, repositories, auto APIs, modularity) and letting us focus entirely on the AI agent architecture.</p>
<p>The era of single LLM calls is ending, and the era of agent ecosystems begins now.</p>
<hr />
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1f54ab-1e35-409b-acfd-5917b33178c7" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1f54ab-1e35-409b-acfd-5917b33178c7" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/async-chain-of-persistence-pattern-designing-for-failure-in-eventdriven-systems-wzjuy4gl</guid>
      <link>https://abp.io/community/posts/async-chain-of-persistence-pattern-designing-for-failure-in-eventdriven-systems-wzjuy4gl</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <category>workflow</category>
      <category>architectural-design</category>
      <category>design-patterns</category>
      <category>event-bus</category>
      <category>distributed-events</category>
      <title>Async Chain of Persistence Pattern: Designing for Failure in Event-Driven Systems</title>
      <description>Messages can get lost while being processed when you use asynchronous messaging or event handling. The Async Chain of Persistence Pattern makes sure that no message is ever lost from the system by making sure that the message is always stored at every step of the workflow.</description>
      <pubDate>Mon, 12 Jan 2026 17:29:32 Z</pubDate>
      <a10:updated>2026-03-06T18:59:23Z</a10:updated>
      <content:encoded><![CDATA[<h1>Async Chain of Persistence Pattern: Designing for Failure in Event-Driven Systems</h1>
<h2>Introduction</h2>
<p>Messages can get lost while being processed when you use asynchronous messaging or event handling.
The Async Chain of Persistence Pattern makes sure that no message is ever lost from the system by making sure that the message is always stored at every step of the workflow.</p>
<h2>The Fundamental Principle of the Pattern</h2>
<p>The Async Chain of Persistence Pattern guarantees that no message is ever lost by ensuring that the message is always persistently stored at every step of the workflow. This is where the pattern gets its name. A message cannot be removed from its previous location until it is confirmed to be persistently stored at the next stage of the chain.</p>
<p>It is commonly used in event-driven systems and message-driven systems.</p>
<h3>Event-Driven versus Message-Driven Systems</h3>
<p>To understand the pattern, it's important to know the differences between events and messages.</p>
<h4>The Core Difference</h4>
<p><strong>Event:</strong> Says &quot;something happened&quot;, describes the past. Example: <code>OrderPlaced</code>, <code>PaymentCompleted</code></p>
<p><strong>Message:</strong> Says &quot;do this&quot;, commands for the future. Example: <code>CreateOrder</code>, <code>SendEmail</code></p>
<p>| Property | Event | Message |
|----------|-------|---------|
| <strong>Coupling</strong> | Loose - no one knows who's listening | Tighter - there's a specific receiver |
| <strong>Publishing</strong> | Pub/Sub - 0-N services listen | Point-to-Point - usually 1 service |
| <strong>Tense</strong> | Past tense, immutable | Present/future tense |
| <strong>Error Handling</strong> | If one consumer fails, others continue | If not processed, system breaks |</p>
<h3>Relationship with Async Chain of Persistence</h3>
<p><strong>In event-driven systems:</strong> Each service receives the event → persists it → publishes a new event</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/2026-01-11/event-driven-systems.png" alt="Event-Driven Systems" /></p>
<p><strong>In message-driven systems:</strong> At each step, the message is kept safe on queue + disk</p>
<p>In both systems, the goal is the same: no message should be lost!</p>
<p><img src="https://raw.githubusercontent.com/abpframework/abp/dev/docs/en/Community-Articles/2026-01-11/message-driven-systems.png" alt="Message-Driven Systems" /></p>
<hr />
<h2>When Do Messages Get Lost?</h2>
<p>There are 3 main scenarios for message loss:</p>
<h3>1. While Processing a Message (Receiving a Message)</h3>
<p>By default, many messaging clients automatically acknowledge a message as soon as it is delivered, which removes it from the queue. If the service then hits a fatal, unrecoverable error while processing the message, or the service instance crashes, the message is lost.</p>
<h3>2. Message Broker Crashes</h3>
<p>In most message brokers, messages are non-persistent by default: they are held only in the broker's memory, which gives faster responses and higher throughput. However, if the broker process fails, non-persistent messages are lost permanently.</p>
<h3>3. Event Chaining</h3>
<p>In event-driven systems, a service publishes a derived event after completing an operation. A message can be lost in two ways in this scenario:</p>
<ol>
<li><strong>Risk of Asynchronous Send:</strong> The publish operation is often carried out asynchronously. If a fatal error occurs before the publisher receives an acknowledgment, it cannot tell whether the message actually reached the broker.</li>
<li><strong>Transaction Coordination:</strong> If an error occurs after the database commit but before the derived event is published, the derived event may be lost.</li>
</ol>
<h2>Implementing the Pattern: 4 Critical Steps</h2>
<p>Four critical steps are required to implement the Async Chain of Persistence Pattern:</p>
<h3>1. Message Persistence</h3>
<p>The first step toward preventing message loss is to mark messages as PERSISTENT: when the message broker receives such a message, it saves it to disk.</p>
<pre><code class="language-javascript">var delivery_mode = PERSISTENT
var producer = create_producer(delivery_mode)

// All sent messages are persisted on the message broker
producer.send_message(APPLY_PAYMENT)

// Alternatively
var delivery_mode = PERSISTENT
var producer = create_producer()
producer.send_message(APPLY_PAYMENT, delivery_mode)
</code></pre>
<p>With this approach, even if the message broker crashes, all messages will still be there when it comes back up.</p>
<h3>2. Client Acknowledgement Mode</h3>
<p>Instead of &quot;auto acknowledge&quot; mode, &quot;client-acknowledgement&quot; mode should be used. In this mode, a received message is retained in the queue until the processing service explicitly acknowledges it.</p>
<p>Client-acknowledgement mode ensures that a message is not lost while a service is processing it. If a fatal error occurs during processing, the service exits without sending an acknowledgement, and the message is redelivered.</p>
<p><strong>Important Note:</strong> The message has to be acknowledged while the message processing operation is completed so that a repeat message will not be processed.</p>
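<p>The redelivery behavior can be simulated with an in-memory queue (a sketch, not a real broker client): the message stays on the queue until the handler acknowledges it, so a crash before the ack leads to redelivery.</p>

```javascript
// In-memory simulation of client-acknowledgement mode: a message is
// removed from the queue only when the handler explicitly acks it.
class FakeQueue {
  constructor() { this.messages = []; }
  send(msg) { this.messages.push(msg); }
  consume(handler) {
    const msg = this.messages[0];
    if (!msg) return;
    try {
      handler(msg, () => this.messages.shift()); // ack = remove from queue
    } catch (e) {
      // No ack was sent: the message stays and will be redelivered.
    }
  }
}

const queue = new FakeQueue();
queue.send({ id: "m1", body: "APPLY_PAYMENT" });

let attempts = 0;
const handler = (msg, ack) => {
  attempts++;
  if (attempts === 1) throw new Error("simulated crash before ack");
  ack(); // only acknowledge after processing succeeds
};

queue.consume(handler); // first delivery: crash, no ack, message kept
queue.consume(handler); // redelivery: succeeds and acks
```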
<h3>3. Synchronous Send</h3>
<p>The synchronous send must be preferred over the asynchronous send.</p>
<p>Although a synchronous send takes longer because it is a blocking call, it guarantees that the message broker has received and persisted the message on disk.</p>
<pre><code class="language-javascript">// Blocking call to publish the derived event
var ack = publish_event(PAYMENT_APPLIED)
if (ack not_successful) {
   retry_or_persist(PAYMENT_APPLIED)
}
</code></pre>
<p>With this approach, the risk of message loss during event chaining is eliminated.</p>
<h3>4. Last Participant Support</h3>
<p>This is the most complex step of the Async Chain of Persistence pattern. It determines when the message should be acknowledged.</p>
<h4>For Message-Driven Systems:</h4>
<pre><code class="language-javascript">var message = receive_message()
process_message(message)
database.commit()
message.acknowledge()
</code></pre>
<p>Recommended order: <strong>commit first, ack last.</strong> Otherwise, if the database operation fails, the message will be lost because it has already been removed from the queue.</p>
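<p>The difference between the two orderings can be seen in a small simulation (a sketch with an in-memory queue; the <code>commitFails</code> flag stands in for a database failure):</p>

```javascript
// Why "commit first, ack last" matters: once a message is acked it is
// gone from the queue, so a failed commit cannot be recovered.
function ackFirstThenCommit(queue, commitFails) {
  queue.shift(); // ack: message removed from the queue immediately
  if (commitFails) return { committed: false, recoverable: queue.length > 0 };
  return { committed: true, recoverable: true };
}

function commitFirstThenAck(queue, commitFails) {
  // peek: the message stays on the queue until it is acknowledged
  if (commitFails) return { committed: false, recoverable: queue.length > 0 };
  queue.shift(); // ack only after a successful commit
  return { committed: true, recoverable: true };
}

const lost = ackFirstThenCommit(["m1"], true); // message already gone
const safe = commitFirstThenAck(["m1"], true); // message still queued
```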
<h4>For Event-Driven Systems</h4>
<p>In event-driven systems, there are two distinct responsibilities: acknowledging the incoming event, and publishing the derived event. The party that publishes the derived event should be treated as the “last participant.”</p>
<pre><code class="language-javascript">var event = receive_event()
process_event(event)
database.commit()
event.acknowledge()

var ack = publish_event(PAYMENT_APPLIED)
if (ack not_successful) {
   retry_or_persist(PAYMENT_APPLIED)
}
</code></pre>
<p>In this sequence, the original event is acknowledged only after processing completes and the database commit succeeds. If publishing the derived event fails, it can be retried or persisted for later delivery.</p>
<hr />
<h2>Trade-offs</h2>
<h3>Advantages</h3>
<p><strong>Preventing Message Loss:</strong>
The major benefit of this pattern is that messages cannot be lost while they are being processed. This is a serious concern in asynchronous systems, and the pattern rules it out.</p>
<h3>Disadvantages</h3>
<h4>1. Possible Duplicate Messages</h4>
<p>Enabling client-acknowledgement mode can cause the same message to be processed more than once. If the service instance fails after the database commit but before the message is acknowledged, the message will be redelivered and processed again.</p>
<p><strong>Solution:</strong> To determine whether an arriving message has already been processed, message IDs can be recorded and checked on each delivery. The drawback is one extra read operation per message.</p>
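<p>One way to sketch that idempotency check (TypeScript here; an in-memory <code>Set</code> stands in for a persisted table of processed IDs, which in production would be updated in the same transaction as the business change):</p>
<pre><code class="language-typescript">// Skips messages whose ID has already been processed, making redelivery
// safe. The Set is an in-memory stand-in for durable storage.

type Message = { id: string; body: string };

class IdempotentConsumer {
  private processedIds = new Set&lt;string&gt;();
  public handled: string[] = [];

  // Returns true if the message was processed, false if it was a duplicate.
  handle(message: Message): boolean {
    if (this.processedIds.has(message.id)) {
      return false; // duplicate delivery: already processed, skip
    }
    this.handled.push(message.body);   // the business processing step
    this.processedIds.add(message.id); // record the ID (the extra write/read cost)
    return true;
  }
}
</code></pre>
<p>With this check in place, an at-least-once delivery guarantee from the broker becomes effectively exactly-once processing for the application.</p>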
<h4>2. Performance and Throughput</h4>
<p>Delivering a persistent message from the message broker can take <strong>up to four times longer</strong> than delivering a non-persistent one.</p>
<p>Persistence also affects both sending and reading performance. Message brokers commonly keep message data in memory for fast reads, but depending on memory size and other factors, there is no guarantee that messages will always be resident in memory.</p>
<h4>3. Impact of Synchronous Send</h4>
<p>A synchronous send blocks until a confirmation is received from the broker, so no other work can be done in the meantime. Persistence makes this delay even more apparent.</p>
<h4>4. Overall Scalability</h4>
<p>Upon receipt, the message broker has to spend extra time persisting messages to disk, which negatively affects scalability. Persistent messages always yield lower total throughput, which can become a limiting factor under high load and high message volumes.</p>
<hr />
<h2>Conclusion</h2>
<p>The Async Chain of Persistence Pattern provides a powerful solution for preventing message loss. Although it has negative effects on performance, throughput, and scalability, these trade-offs are generally acceptable in systems where message loss is unacceptable.</p>
<p>Before implementing the pattern, carefully analyze your system requirements:</p>
<ul>
<li><strong>How critical is message loss?</strong></li>
<li><strong>What are the performance and throughput requirements?</strong></li>
<li><strong>How should the system behave in case of duplicate processing?</strong></li>
</ul>
<p>The answers to these questions will help you determine whether the Async Chain of Persistence Pattern is suitable for your system.</p>
<h2>Sample Project</h2>
<p>To see an example project where this pattern is implemented, you can check out the repository:</p>
<p>🔗 <strong><a href="https://github.com/fahrigedik/SoftwareArchitecturePatterns">GitHub Repository</a></strong></p>
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1ec56e-56e7-34ec-af42-1d9a880c984f" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1ec56e-56e7-34ec-af42-1d9a880c984f" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/announcing-serverside-rendering-support-for-abp-framework-angular-applications-qvhbgnqn</guid>
      <link>https://abp.io/community/posts/announcing-serverside-rendering-support-for-abp-framework-angular-applications-qvhbgnqn</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <category>angular</category>
      <category>abp</category>
      <category>prerendering</category>
      <category>abp-framework</category>
      <title>Announcing Server-Side Rendering Support for ABP Framework Angular Applications</title>
      <description>We are pleased to announce that Server-Side Rendering (SSR) has become available for ABP Framework Angular applications! This highly requested feature brings major gains in performance, SEO, and user experience to your Angular applications based on ABP Framework.</description>
      <pubDate>Thu, 20 Nov 2025 06:46:59 Z</pubDate>
      <a10:updated>2026-03-06T22:27:47Z</a10:updated>
      <content:encoded><![CDATA[<h1>Announcing Server-Side Rendering (SSR) Support for ABP Framework Angular Applications</h1>
<p>We are pleased to announce that <strong>Server-Side Rendering (SSR)</strong> has become available for ABP Framework Angular applications! This highly requested feature brings major gains in performance, SEO, and user experience to your Angular applications based on ABP Framework.</p>
<h2>What is Server-Side Rendering (SSR)?</h2>
<p>Server-Side Rendering is an approach that renders your Angular application on the server instead of in the browser. The server creates the complete HTML for a page and sends it to the client, which can display it immediately. This offers many advantages over traditional client-side rendering.</p>
<h2>Why SSR Matters for ABP Angular Applications</h2>
<h3>Improved Performance</h3>
<ul>
<li><strong>Faster First Contentful Paint (FCP)</strong>: Because pre-rendered HTML is sent from the server, users see content sooner.</li>
<li><strong>Better perceived performance</strong>: Even on slower devices, the page displays something sooner.</li>
<li><strong>Less JavaScript parsing time</strong>: The initial page load does not require parsing and executing a large JavaScript bundle.</li>
</ul>
<h3>Enhanced SEO</h3>
<ul>
<li><strong>Improved indexing by search engines</strong>: Search engine bots can crawl and index your content more reliably, since the full HTML is available without executing JavaScript.</li>
<li><strong>Better search rankings</strong>: The faster your content loads and the easier it is to access, the better your SEO score.</li>
<li><strong>Rich social previews</strong>: Links shared on social platforms generate rich previews with the appropriate meta tags.</li>
</ul>
<h3>Better User Experience</h3>
<ul>
<li><strong>Support for low bandwidth</strong>: Users with slower Internet connections will have a better experience</li>
<li><strong>Progressive enhancement</strong>: Users can start accessing the content before JavaScript has loaded</li>
<li><strong>Better accessibility</strong>: Screen readers and other assistive technologies can access the content immediately</li>
</ul>
<h2>Getting Started with SSR</h2>
<h3>Adding SSR to an Existing Project</h3>
<p>You can easily add SSR support to your existing ABP Angular application using the Angular CLI with ABP schematics:</p>
<blockquote>
<p>Adds SSR configuration to your project</p>
</blockquote>
<pre><code class="language-bash">ng generate @abp/ng.schematics:ssr-add
</code></pre>
<blockquote>
<p>Short form</p>
</blockquote>
<pre><code class="language-bash">ng g @abp/ng.schematics:ssr-add
</code></pre>
<p>If you have multiple projects in your workspace, you can specify which project to add SSR to:</p>
<pre><code class="language-bash">ng g @abp/ng.schematics:ssr-add --project=my-project
</code></pre>
<p>If you want to skip the automatic installation of dependencies:</p>
<pre><code class="language-bash">ng g @abp/ng.schematics:ssr-add --skip-install
</code></pre>
<h2>What Gets Configured</h2>
<p>When you add SSR to your ABP Angular project, the schematic automatically:</p>
<ol>
<li><strong>Installs necessary dependencies</strong>: Adds <code>@angular/ssr</code> and related packages</li>
<li><strong>Creates Server Configuration</strong>: Creates <code>server.ts</code> and related files</li>
<li><strong>Updates Project Structure</strong>:
<ul>
<li>Creates <code>main.server.ts</code> to bootstrap the server</li>
<li>Adds <code>app.config.server.ts</code> for standalone apps (or <code>app.module.server.ts</code> for NgModule apps)</li>
<li>Configures server routes in <code>app.routes.server.ts</code></li>
</ul>
</li>
<li><strong>Updates Build Configuration</strong>: Updates <code>angular.json</code> to include:
<ul>
<li>A <code>serve-ssr</code> target for local SSR development</li>
<li>A <code>prerender</code> target for static site generation</li>
<li>Proper output paths for browser and server bundles</li>
</ul>
</li>
</ol>
<h2>Supported Configurations</h2>
<p>The ABP SSR schematic supports both modern and legacy Angular build configurations:</p>
<h3>Application Builder (Suggested)</h3>
<ul>
<li>The new <code>@angular-devkit/build-angular:application</code> builder</li>
<li>Optimized for Angular 17+ apps</li>
<li>Enhanced performance and smaller bundle sizes</li>
</ul>
<h3>Server Builder (Legacy)</h3>
<ul>
<li>The original <code>@angular-devkit/build-angular:server</code> builder</li>
<li>Maintained for compatibility with older Angular applications</li>
</ul>
<h2>Running Your SSR Application</h2>
<p>After adding SSR to your project, you can run your application in SSR mode:</p>
<pre><code class="language-bash"># Development mode with SSR
ng serve

# Or specifically target SSR development server
npm run serve:ssr

# Build for production
npm run build:ssr

# Preview production build
npm run serve:ssr:production
</code></pre>
<h2>Important Considerations</h2>
<h3>Browser-Only APIs</h3>
<p>Some browser APIs are not available on the server. Use platform checks to conditionally execute code:</p>
<pre><code class="language-typescript">import { isPlatformBrowser } from '@angular/common';
import { PLATFORM_ID, inject } from '@angular/core';

export class MyComponent {
  private platformId = inject(PLATFORM_ID);
  
  ngOnInit() {
    if (isPlatformBrowser(this.platformId)) {
      // Code that uses browser-only APIs
      console.log('Running in browser');
      localStorage.setItem('key', 'value');
    }
  }
}
</code></pre>
<h3>Storage APIs</h3>
<p><code>localStorage</code> and <code>sessionStorage</code> are not accessible on the server. Consider using:</p>
<ul>
<li>Cookies for server-accessible data.</li>
<li>The state transfer API for hydration.</li>
<li>ABP's built-in storage abstractions.</li>
</ul>
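<p>As a minimal sketch of the idea (framework-free TypeScript, not an ABP or Angular API; the in-memory fallback is an assumption for illustration), a storage wrapper can detect whether <code>localStorage</code> exists and degrade gracefully on the server:</p>
<pre><code class="language-typescript">// Falls back to an in-memory map when localStorage is unavailable, as it
// is during server-side rendering. Note that server-side values are not
// persisted across requests; use cookies or state transfer for that.

class SafeStorage {
  private memory = new Map&lt;string, string&gt;();
  // Looked up via globalThis so this compiles without DOM type definitions.
  private store: { setItem(k: string, v: string): void; getItem(k: string): string | null } | null =
    typeof (globalThis as any).localStorage !== "undefined"
      ? (globalThis as any).localStorage
      : null;

  setItem(key: string, value: string): void {
    if (this.store) this.store.setItem(key, value);
    else this.memory.set(key, value);
  }

  getItem(key: string): string | null {
    if (this.store) return this.store.getItem(key);
    return this.memory.get(key) ?? null;
  }
}
</code></pre>
<p>Components can then use the wrapper on both platforms without platform checks at every call site.</p>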
<h3>Third-Party Libraries</h3>
<p>Please ensure that any third-party libraries you use are compatible with SSR. Such libraries may require:</p>
<ul>
<li>Dynamic imports for browser-only code.</li>
<li>Platform-specific service providers.</li>
<li>Custom Angular Universal integration.</li>
</ul>
<h2>ABP Framework Integration</h2>
<p>The SSR implementation is natively integrated with all of the ABP Framework features:</p>
<ul>
<li><strong>Authentication &amp; Authorization</strong>: The OAuth/OpenID Connect flow functions seamlessly with ABP</li>
<li><strong>Multi-tenancy</strong>: Fully supports tenant resolution and switching</li>
<li><strong>Localization</strong>: Server-side rendering respects the locale</li>
<li><strong>Permission Management</strong>: Permission checks work on both server and client</li>
<li><strong>Configuration</strong>: The ABP configuration system is SSR-ready</li>
</ul>
<h2>Performance Tips</h2>
<ol>
<li><strong>Utilize State Transfer</strong>: Send data from server to client to eliminate redundant HTTP requests</li>
<li><strong>Optimize Images</strong>: Use proper image loading strategies, such as lazy loading and responsive images</li>
<li><strong>Cache API Responses</strong>: Implement proper caching strategies on the server</li>
<li><strong>Monitor Bundle Size</strong>: Keep your server bundle optimized</li>
<li><strong>Use Prerendering</strong>: Use the prerender target for static content</li>
</ol>
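<p>The state-transfer idea in tip 1 can be illustrated framework-free. This sketch is the concept only, not Angular's actual <code>TransferState</code> API: the server records fetched data alongside rendering, and the client consumes that cache before issuing an HTTP request:</p>
<pre><code class="language-typescript">// Conceptual sketch of server-to-client state transfer. The server
// serializes fetched data into the rendered page; the client reads it
// once instead of repeating the same HTTP request.

type TransferCache = Record&lt;string, unknown&gt;;

// Server side: record the data that was fetched while rendering.
function storeState(cache: TransferCache, key: string, data: unknown): void {
  cache[key] = data;
}

// Client side: use transferred data if present, otherwise fetch.
function getOrFetch&lt;T&gt;(
  cache: TransferCache,
  key: string,
  fetcher: () =&gt; T
): T {
  if (key in cache) {
    const value = cache[key] as T;
    delete cache[key]; // consume once; later reads go to the network again
    return value;
  }
  return fetcher();
}
</code></pre>
<p>The one-time consumption mirrors hydration: the first client-side read is free, and subsequent reads behave like a normal fetch.</p>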
<h2>Conclusion</h2>
<p>Server-side rendering is a powerful way to improve your ABP Angular application's performance, SEO, and user experience. Our new SSR schematic makes it easier than ever to add SSR to your project.</p>
<p>Try it today and let us know what you think!</p>
<hr />
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1db231-02c9-6899-e41a-8591ad18e07d" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1db231-02c9-6899-e41a-8591ad18e07d" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/signalbased-forms-in-angular-21-why-youll-never-miss-reactive-forms-again-9qentsqs</guid>
      <link>https://abp.io/community/posts/signalbased-forms-in-angular-21-why-youll-never-miss-reactive-forms-again-9qentsqs</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <category>angular</category>
      <category>new-features</category>
      <title>Signal-Based Forms in Angular 21: Why You’ll Never Miss Reactive Forms Again</title>
      <description>Angular 21 introduces one of the most exciting developments in the modern edition of Angular: Signal-Based Forms. Built directly on the reactive foundation of Angular signals, this new experimental API provides a cleaner, more intuitive, strongly typed, and ergonomic approach for managing form state—without the heavy boilerplate of Reactive Forms.</description>
      <pubDate>Tue, 18 Nov 2025 08:32:44 Z</pubDate>
      <a10:updated>2026-03-07T03:26:06Z</a10:updated>
      <content:encoded><![CDATA[<h1>Signal-Based Forms in Angular 21: Why You’ll Never Miss Reactive Forms Again</h1>
<p>Angular 21 introduces one of the most exciting developments in the modern edition of Angular: <strong>Signal-Based Forms</strong>. Built directly on the reactive foundation of Angular signals, this new experimental API provides a cleaner, more intuitive, strongly typed, and ergonomic approach for managing form state—without the heavy boilerplate of Reactive Forms.</p>
<blockquote>
<p>⚠️ <strong>Important:</strong> Signal Forms are <em>experimental</em>.
Their API can change. Avoid using them in critical production scenarios unless you understand the risks.</p>
</blockquote>
<p>Despite this, Signal Forms clearly represent Angular’s future direction.</p>
<h2>Why Signal Forms?</h2>
<p>Traditionally in Angular, building forms has involved several concerns:</p>
<ul>
<li>Tracking values</li>
<li>Managing UI interaction states (touched, dirty)</li>
<li>Handling validation</li>
<li>Keeping UI and model in sync</li>
</ul>
<p>Reactive Forms solved many challenges but introduced their own:</p>
<ul>
<li>A verbose FormBuilder API</li>
<li>Required subscriptions (<code>valueChanges</code>)</li>
<li>Manual cleanup of those subscriptions</li>
<li>Awkward nested forms</li>
<li>Weak type safety</li>
</ul>
<p><strong>Signal Forms solve these problems through:</strong></p>
<ol>
<li>Automatic synchronization</li>
<li>Full type safety</li>
<li>Schema-based validation</li>
<li>Fine-grained reactivity</li>
<li>Drastically reduced boilerplate</li>
<li>Natural integration with Angular Signals</li>
</ol>
<hr />
<h3>1. Form Models — The Core of Signal Forms</h3>
<p>A <strong>form model</strong> is simply a writable signal holding the structure of your form data.</p>
<pre><code class="language-ts">import { Component, signal } from '@angular/core';
import { form, Field } from '@angular/forms/signals';

@Component({
  selector: 'app-login',
  imports: [Field],
  template: `
    &lt;input type=&quot;email&quot; [field]=&quot;loginForm.email&quot; /&gt;
    &lt;input type=&quot;password&quot; [field]=&quot;loginForm.password&quot; /&gt;
  `,
})
export class LoginComponent {
  loginModel = signal({
    email: '',
    password: '',
  });

  loginForm = form(this.loginModel);
}
</code></pre>
<p>Calling <code>form(model)</code> creates a <strong>Field Tree</strong> that maps directly to your model.</p>
<hr />
<h3>2. Achieving Full Type Safety</h3>
<p>Although TypeScript can infer types from object literals, defining explicit interfaces provides maximum safety and better IDE support.</p>
<pre><code class="language-ts">interface LoginData {
  email: string;
  password: string;
}

loginModel = signal&lt;LoginData&gt;({
  email: '',
  password: '',
});

loginForm = form(this.loginModel);
</code></pre>
<p>Now:</p>
<ul>
<li><code>loginForm.email</code> → <code>FieldTree&lt;string&gt;</code></li>
<li>Accessing invalid fields like <code>loginForm.username</code> results in compile-time errors</li>
</ul>
<p>This level of type safety surpasses Reactive Forms.</p>
<hr />
<h3>3. Reading Form Values</h3>
<h4>Read from the model (entire form):</h4>
<pre><code class="language-ts">onSubmit() {
  const data = this.loginModel();
  console.log(data.email, data.password);
}
</code></pre>
<h4>Read from an individual field:</h4>
<pre><code class="language-html">&lt;p&gt;Current email: {{ loginForm.email().value() }}&lt;/p&gt;
</code></pre>
<p>Each field exposes:</p>
<ul>
<li><code>value()</code></li>
<li><code>valid()</code></li>
<li><code>errors()</code></li>
<li><code>dirty()</code></li>
<li><code>touched()</code></li>
</ul>
<p>All as signals.</p>
<hr />
<h3>4. Updating Form Models Programmatically</h3>
<p>Signal Forms allow three update methods.</p>
<h4>1. Replace the entire model</h4>
<pre><code class="language-ts">this.userModel.set({
  name: 'Alice',
  email: 'alice@example.com',
});
</code></pre>
<h4>2. Patch specific fields</h4>
<pre><code class="language-ts">this.userModel.update(prev =&gt; ({
  ...prev,
  email: newEmail,
}));
</code></pre>
<h4>3. Update a single field</h4>
<pre><code class="language-ts">this.userForm.email().value.set('');
</code></pre>
<p>This eliminates the need for:</p>
<ul>
<li><code>patchValue()</code></li>
<li><code>setValue()</code></li>
<li><code>formGroup.get('field')</code></li>
</ul>
<hr />
<h3>5. Automatic Two-Way Binding With <code>[field]</code></h3>
<p>The <code>[field]</code> directive enables perfect two-way data binding:</p>
<pre><code class="language-html">&lt;input [field]=&quot;userForm.name&quot; /&gt;
</code></pre>
<h4>How it works:</h4>
<ul>
<li><strong>User input → Field state → Model</strong></li>
<li><strong>Model updates → Field state → Input UI</strong></li>
</ul>
<p>No subscriptions.<br />
No event handlers.<br />
No boilerplate.</p>
<p>Reactive Forms could never achieve this cleanly.</p>
<hr />
<h3>6. Nested Models and Arrays</h3>
<p>Models can contain nested object structures:</p>
<pre><code class="language-ts">userModel = signal({
  name: '',
  address: {
    street: '',
    city: '',
  },
});
</code></pre>
<p>Access fields easily:</p>
<pre><code class="language-html">&lt;input [field]=&quot;userForm.address.street&quot; /&gt;
</code></pre>
<p>Arrays are also supported:</p>
<pre><code class="language-ts">orderModel = signal({
  items: [
    { product: '', quantity: 1, price: 0 }
  ]
});
</code></pre>
<p>Field state persists even when array items move, thanks to identity tracking.</p>
<hr />
<h3>7. Schema-Based Validation</h3>
<p>Validation is clean and centralized:</p>
<pre><code class="language-ts">import { required, email } from '@angular/forms/signals';

const model = signal({ email: '' });

const formRef = form(model, {
  email: [required(), email()],
});
</code></pre>
<p>Field validation state is reactive:</p>
<pre><code class="language-ts">formRef.email().valid()
formRef.email().errors()
formRef.email().touched()
</code></pre>
<p>Validation no longer scatters across components.</p>
<hr />
<h3>8. When Should You Use Signal Forms?</h3>
<h4>New Angular 21+ apps</h4>
<p>Signal-first architecture is the new standard.</p>
<h4>Teams wanting stronger type safety</h4>
<p>Every field is exactly typed.</p>
<h4>Devs tired of Reactive Form boilerplate</h4>
<p>Signal Forms drastically simplify code.</p>
<h4>Complex UI with computed reactive form state</h4>
<p>Signals integrate perfectly.</p>
<h4>❌ Avoid if:</h4>
<ul>
<li>You need long-term stability</li>
<li>You rely on mature Reactive Forms features</li>
<li>Your app must avoid experimental APIs</li>
</ul>
<hr />
<h3>9. Reactive Forms vs Signal Forms</h3>
<table>
<thead>
<tr><th>Feature</th><th>Reactive Forms</th><th>Signal Forms</th></tr>
</thead>
<tbody>
<tr><td>Boilerplate</td><td>High</td><td>Very low</td></tr>
<tr><td>Type safety</td><td>Weak</td><td>Strong</td></tr>
<tr><td>Two-way binding</td><td>Manual</td><td>Automatic</td></tr>
<tr><td>Validation</td><td>Scattered</td><td>Centralized schema</td></tr>
<tr><td>Nested forms</td><td>Verbose</td><td>Natural</td></tr>
<tr><td>Subscriptions</td><td>Required</td><td>None</td></tr>
<tr><td>Change detection</td><td>Zone-heavy</td><td>Fine-grained</td></tr>
</tbody>
</table>
<p>Signal Forms feel like the &quot;modern Angular mode,&quot; while Reactive Forms increasingly feel legacy.</p>
<hr />
<h3>10. Full Example: Login Form</h3>
<pre><code class="language-ts">@Component({
  selector: 'app-login',
  imports: [Field],
  template: `
    &lt;form (ngSubmit)=&quot;submit()&quot;&gt;
      &lt;input type=&quot;email&quot; [field]=&quot;form.email&quot; /&gt;
      &lt;input type=&quot;password&quot; [field]=&quot;form.password&quot; /&gt;
      &lt;button&gt;Login&lt;/button&gt;
    &lt;/form&gt;
  `,
})
export class LoginComponent {
  model = signal({ email: '', password: '' });
  form = form(this.model);

  submit() {
    console.log(this.model());
  }
}
</code></pre>
<p>Minimal. Reactive. Completely type-safe.</p>
<hr />
<h2><strong>Conclusion</strong></h2>
<p>Signal Forms in Angular 21 represent a big step forward:</p>
<ul>
<li>Cleaner API</li>
<li>Stronger type safety</li>
<li>Automatic two-way binding</li>
<li>Centralized validation</li>
<li>Fine-grained reactivity</li>
<li>Dramatically better developer experience</li>
</ul>
<p>Although Signal Forms are still experimental, they clearly show the future of Angular's form ecosystem.
Once you start using them, you may never want to go back to Reactive Forms.</p>
<hr />
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1da845-1e25-15af-d689-5d4a3a5c1ea6" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1da845-1e25-15af-d689-5d4a3a5c1ea6" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/why-do-you-need-distributed-locking-in-asp.net-core-fx1895hh</guid>
      <link>https://abp.io/community/posts/why-do-you-need-distributed-locking-in-asp.net-core-fx1895hh</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <category>asp.net-core</category>
      <category>abp-framework</category>
      <category>concurrency</category>
      <category>redis</category>
      <category>distributed-events</category>
      <title>Why Do You Need Distributed Locking in ASP.NET Core</title>
      <description>This article explores why distributed locking is essential for scalable ASP.NET Core applications and how to implement it effectively. When applications scale horizontally across multiple servers or containers, standard C# locking mechanisms (lock, SemaphoreSlim) fail to prevent race conditions between instances, leading to duplicate operations, data inconsistencies, and conflicts. Drawing from experience building the ABP Framework, the article examines various solutions—database-based locks, Redis, Azure B</description>
      <pubDate>Fri, 03 Oct 2025 03:21:12 Z</pubDate>
      <a10:updated>2026-03-06T20:29:02Z</a10:updated>
      <content:encoded><![CDATA[<h1>Why Do You Need Distributed Locking in ASP.NET Core</h1>
<h2>Introduction</h2>
<p>In modern distributed systems, synchronizing access to shared resources across numerous instances is a critical problem. Whenever many servers or processes attempt to update the same resource simultaneously, race conditions can lead to data corruption, redundant work, and inconsistent state. While building the ABP Framework, we encountered and solved this exact problem with a reliable distributed locking mechanism. In this post, we share our experience and lessons learned, so you can understand when and why you need distributed locking in your ASP.NET Core applications.</p>
<h2>Problem</h2>
<p>Suppose you are running an e-commerce application deployed on multiple servers for high availability. A customer places an order, which kicks off a background job that reserves inventory and charges payment. Without proper synchronization, here is what can happen:</p>
<h3>Race Conditions in Multi-Instance Deployments</h3>
<p>When your ASP.NET Core application is scaled horizontally with multiple instances, each instance works independently. If two instances simultaneously perform the same operation—like deducting inventory, generating invoice numbers, or processing a refund—you can end up with:</p>
<ul>
<li><strong>Duplicate operations</strong>: The same payment processed twice</li>
<li><strong>Data inconsistency</strong>: Inventory count becomes negative or incorrect</li>
<li><strong>Lost updates</strong>: One instance's changes overwrite another's</li>
<li><strong>Sequential ID conflicts</strong>: Two instances generate the same invoice number</li>
</ul>
<h3>Background Job Processing</h3>
<p>Background job libraries like Quartz.NET or Hangfire usually run on multiple workers. Without distributed locking:</p>
<ul>
<li>Multiple workers can pick up the same task</li>
<li>Long-running processes can execute in parallel when they should run sequentially</li>
<li>Jobs that depend on exclusive resource access can corrupt shared data</li>
</ul>
<h3>Cache Invalidation and Refresh</h3>
<p>When distributed caching is employed, multiple instances can simultaneously detect a cache miss and attempt to rebuild the cache, leading to:</p>
<ul>
<li>High database load due to concurrent cache rebuild requests</li>
<li>Race conditions in which older data overrides newer data</li>
<li>Wasted computational resources</li>
</ul>
<h3>Rate Limiting and Throttling</h3>
<p>Enforcing rate limits across multiple application instances requires coordination. Without distributed locking or another shared coordination mechanism, each instance enforces its own limits, and global rate limits cannot be enforced properly.</p>
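<p>To see why per-instance counters fail, consider this small sketch (TypeScript for illustration only; a <code>Map</code> stands in for a shared store such as Redis). With a shared counter store, two limiter instances together admit exactly the configured limit; with separate stores, each admits the full limit on its own:</p>
<pre><code class="language-typescript">// Sketch: a fixed-window rate limiter. With a per-instance counter, N
// instances admit up to N * limit requests; sharing one counter store
// across instances restores the intended global limit.

class FixedWindowLimiter {
  constructor(
    private limit: number,
    private counters: Map&lt;string, number&gt; // shared store vs. per-instance
  ) {}

  tryAcquire(clientId: string): boolean {
    const used = this.counters.get(clientId) ?? 0;
    if (used &gt;= this.limit) return false; // over the window limit
    this.counters.set(clientId, used + 1);
    return true;
  }
}
</code></pre>
<p>A production version would also need atomic increment-and-check on the shared store, which is exactly where distributed coordination comes in.</p>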
<p>The root issue is simple: <strong>the default C# locking APIs (lock, SemaphoreSlim, Monitor) only work within a single process</strong>. They do not help in distributed cases where coordination must happen across servers, containers, or cloud instances.</p>
<h2>Solutions</h2>
<p>Several approaches exist for implementing distributed locking in ASP.NET Core applications. Let's explore the most common solutions, their trade-offs, and why we chose our approach for ABP.</p>
<h3>1. Database-Based Locking</h3>
<p>Using your existing database to take locks by inserting or updating rows with unique values.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>No additional infrastructure required</li>
<li>Works with any relational database</li>
<li>Transactions provide ACID guarantees</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Database round-trip performance overhead</li>
<li>Can lead to database contention under high load</li>
<li>Locks must be cleaned up carefully to prevent orphaned locks</li>
<li>Not suited for high-frequency locking scenarios</li>
</ul>
<p><strong>When to use:</strong> Small-scale applications where you do not wish to add additional infrastructure, and lock operations are low frequency.</p>
<h3>2. Redis-Based Locking</h3>
<p>Redis offers atomic operations that make it excellent for distributed locking, using commands such as <code>SET NX</code> (set if not exists) with expiration.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Low latency and high performance</li>
<li>Built-in expiration prevents orphaned locks</li>
<li>Well-established, battle-tested patterns (Redlock algorithm)</li>
<li>Works well for high-throughput use cases</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Requires Redis infrastructure</li>
<li>Network partitions can be an issue</li>
<li>A single Redis instance is a single point of failure (although Redis Cluster reduces this risk)</li>
</ul>
<p><strong>Resources:</strong></p>
<ul>
<li><a href="https://redis.io/docs/manual/patterns/distributed-locks/">Redis Distributed Locks Documentation</a></li>
<li><a href="https://redis.io/topics/distlock">Redlock Algorithm</a></li>
</ul>
<p><strong>When to use:</strong> Production applications with multiple instances where performance is critical, especially if you are already using Redis as a caching layer.</p>
<h3>3. Azure Blob Storage Leases</h3>
<p>Azure Blob Storage offers lease functionality which can be utilized for distributed locks.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Part of Azure; no extra infrastructure</li>
<li>Automatic lease expiration</li>
<li>Cost-effective for low-frequency locking</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Azure-specific, not portable</li>
<li>Higher latency than Redis</li>
<li>Only suitable for Azure-hosted projects</li>
</ul>
<p><strong>When to use:</strong> Azure-native applications with low locking frequency where you want to minimize moving parts.</p>
<h3>4. etcd or ZooKeeper</h3>
<p>Distributed coordination services designed from the ground up for consensus and locking.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Designed for distributed coordination</li>
<li>Strong consistency guarantees</li>
<li>Robust against network partitions</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Complex infrastructure setup</li>
<li>Overkill for most applications</li>
<li>Steep learning curve</li>
</ul>
<p><strong>When to use:</strong> Large distributed systems with complex coordination needs that go beyond basic locking.</p>
<h3>Our Choice: Abstraction with Multiple Implementations</h3>
<p>For ABP, we chose an <strong>abstraction layer</strong> with support for multiple backends. This gives developers the flexibility to choose the best implementation for their infrastructure. Our default implementations include support for:</p>
<ul>
<li><strong>Redis</strong> (recommended for most scenarios)</li>
<li><strong>Database-based locking</strong> (for simpler setups)</li>
<li><strong>In-memory locking</strong> (for single-instance and development scenarios)</li>
</ul>
<p>We started with Redis because it offers the best trade-off between operational simplicity, reliability, and performance for distributed scenarios. The abstraction keeps applications from becoming tied to one technology, making it easy to start simple and expand as needed.</p>
<h2>Implementation</h2>
<p>Let's implement a simplified distributed locking mechanism using Redis and StackExchange.Redis. This example shows the core concepts without ABP's framework complexity.</p>
<p>First, install the required package:</p>
<pre><code class="language-bash">dotnet add package StackExchange.Redis
</code></pre>
<p>Here's a basic distributed lock implementation:</p>
<pre><code class="language-csharp">public interface IDistributedLock
{
    Task&lt;IDisposable?&gt; TryAcquireAsync(
        string resource,
        TimeSpan expirationTime,
        CancellationToken cancellationToken = default);
}

public class RedisDistributedLock : IDistributedLock
{
    private readonly IConnectionMultiplexer _redis;
    private readonly ILogger&lt;RedisDistributedLock&gt; _logger;

    public RedisDistributedLock(
        IConnectionMultiplexer redis,
        ILogger&lt;RedisDistributedLock&gt; logger)
    {
        _redis = redis;
        _logger = logger;
    }

    public async Task&lt;IDisposable?&gt; TryAcquireAsync(
        string resource,
        TimeSpan expirationTime,
        CancellationToken cancellationToken = default)
    {
        var db = _redis.GetDatabase();
        var lockKey = $&quot;lock:{resource}&quot;;
        var lockValue = Guid.NewGuid().ToString();

        // Try to acquire the lock using SET NX with expiration
        var acquired = await db.StringSetAsync(
            lockKey,
            lockValue,
            expirationTime,
            When.NotExists);

        if (!acquired)
        {
            _logger.LogDebug(
                &quot;Failed to acquire lock for resource: {Resource}&quot;,
                resource);
            return null;
        }

        _logger.LogDebug(
            &quot;Lock acquired for resource: {Resource}&quot;,
            resource);

        return new RedisLockHandle(db, lockKey, lockValue, _logger);
    }

    private class RedisLockHandle : IDisposable
    {
        private readonly IDatabase _db;
        private readonly string _lockKey;
        private readonly string _lockValue;
        private readonly ILogger _logger;
        private bool _disposed;

        public RedisLockHandle(
            IDatabase db,
            string lockKey,
            string lockValue,
            ILogger logger)
        {
            _db = db;
            _lockKey = lockKey;
            _lockValue = lockValue;
            _logger = logger;
        }

        public void Dispose()
        {
            if (_disposed) return;

            try
            {
                // Only delete if we still own the lock
                var script = @&quot;
                    if redis.call('get', KEYS[1]) == ARGV[1] then
                        return redis.call('del', KEYS[1])
                    else
                        return 0
                    end&quot;;

                _db.ScriptEvaluate(
                    script,
                    new RedisKey[] { _lockKey },
                    new RedisValue[] { _lockValue });

                _logger.LogDebug(&quot;Lock released for key: {LockKey}&quot;, _lockKey);
            }
            catch (Exception ex)
            {
                _logger.LogError(
                    ex,
                    &quot;Error releasing lock for key: {LockKey}&quot;,
                    _lockKey);
            }
            finally
            {
                _disposed = true;
            }
        }
    }
}
</code></pre>
<p>Register the service in your <code>Program.cs</code>:</p>
<pre><code class="language-csharp">builder.Services.AddSingleton&lt;IConnectionMultiplexer&gt;(sp =&gt;
{
    var configuration = ConfigurationOptions.Parse(&quot;localhost:6379&quot;);
    return ConnectionMultiplexer.Connect(configuration);
});

builder.Services.AddSingleton&lt;IDistributedLock, RedisDistributedLock&gt;();
</code></pre>
<p>Now you can use distributed locking in your services:</p>
<pre><code class="language-csharp">public class OrderService
{
    private readonly IDistributedLock _distributedLock;
    private readonly ILogger&lt;OrderService&gt; _logger;

    public OrderService(
        IDistributedLock distributedLock,
        ILogger&lt;OrderService&gt; logger)
    {
        _distributedLock = distributedLock;
        _logger = logger;
    }

    public async Task ProcessOrderAsync(string orderId)
    {
        var lockResource = $&quot;order:{orderId}&quot;;
        
        // Try to acquire the lock with 30-second expiration
        using var lockHandle = await _distributedLock.TryAcquireAsync(
            lockResource,
            TimeSpan.FromSeconds(30));

        if (lockHandle == null)
        {
            _logger.LogWarning(
                &quot;Could not acquire lock for order {OrderId}. &quot; +
                &quot;Another process might be processing it.&quot;,
                orderId);
            return;
        }

        // Critical section - only one instance will execute this
        _logger.LogInformation(&quot;Processing order {OrderId}&quot;, orderId);
        
        // Your order processing logic here
        await Task.Delay(1000); // Simulating work
        
        _logger.LogInformation(
            &quot;Order {OrderId} processed successfully&quot;,
            orderId);
        
        // Lock is automatically released when lockHandle is disposed
    }
}
</code></pre>
<h3>Key Implementation Details</h3>
<p><strong>Lock Key Uniqueness</strong>: Use hierarchical, descriptive keys (<code>order:12345</code>, <code>inventory:product-456</code>) to avoid collisions.</p>
<p><strong>Lock Value</strong>: We use a unique GUID as the lock value so that only the lock owner can release it. This prevents a process from accidentally deleting a lock that has expired and been re-acquired by someone else.</p>
<p><strong>Automatic Expiration</strong>: Always provide an expiration time to prevent deadlocks when a process halts with an outstanding lock.</p>
<p><strong>Lua Script for Release</strong>: Release uses a Lua script to atomically check ownership and delete the key. This prevents deleting a lock that has already expired and been re-acquired by another process.</p>
<p><strong>Disposal Pattern</strong>: Implementing <code>IDisposable</code> and acquiring the handle with a <code>using</code> declaration ensures the lock is released even if an exception occurs.</p>
<h3>Handling Lock Acquisition Failures</h3>
<p>Depending on your use case, you have several options when lock acquisition fails:</p>
<pre><code class="language-csharp">// Option 1: Return early (shown above)
if (lockHandle == null)
{
    return;
}

// Option 2: Retry with timeout
var retryCount = 0;
var maxRetries = 3;
IDisposable? lockHandle = null;

while (lockHandle == null &amp;&amp; retryCount &lt; maxRetries)
{
    lockHandle = await _distributedLock.TryAcquireAsync(
        lockResource,
        TimeSpan.FromSeconds(30));
    
    if (lockHandle == null)
    {
        retryCount++;
        await Task.Delay(TimeSpan.FromMilliseconds(100 * retryCount));
    }
}

if (lockHandle == null)
{
    throw new InvalidOperationException(&quot;Could not acquire lock after retries&quot;);
}

// Option 3: Queue for later processing
if (lockHandle == null)
{
    await _queueService.EnqueueForLaterAsync(orderId);
    return;
}
</code></pre>
<p>This is a solid foundation for distributed locking in ASP.NET Core applications. It covers the most common scenarios and edge cases, though production systems may need more advanced features such as lock renewal (extending the expiration while a long-running operation is still in progress) or more sophisticated retry strategies.</p>
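<p>For long-running operations, the lock can be kept alive by periodically extending its TTL while the work is still in progress. The sketch below is a minimal renewal loop reusing the key/value scheme of <code>RedisDistributedLock</code> above; it is a hypothetical helper, not production-hardened (no jitter, no graceful handover):</p>
<pre><code class="language-csharp">public static async Task RunWithLockRenewalAsync(
    IDatabase db,
    string lockKey,
    string lockValue,
    TimeSpan expiration,
    Func&lt;CancellationToken, Task&gt; work)
{
    using var cts = new CancellationTokenSource();

    // Lua script: extend the TTL only if we still own the lock
    const string renewScript = @&quot;
        if redis.call('get', KEYS[1]) == ARGV[1] then
            return redis.call('pexpire', KEYS[1], ARGV[2])
        else
            return 0
        end&quot;;

    var renewalTask = Task.Run(async () =&gt;
    {
        while (!cts.Token.IsCancellationRequested)
        {
            try
            {
                // Renew at half the expiration interval
                await Task.Delay(expiration / 2, cts.Token);
            }
            catch (OperationCanceledException)
            {
                break; // work finished, stop renewing
            }

            var renewed = (long)await db.ScriptEvaluateAsync(
                renewScript,
                new RedisKey[] { lockKey },
                new RedisValue[] { lockValue, (long)expiration.TotalMilliseconds });

            if (renewed == 0)
            {
                cts.Cancel(); // we lost the lock; ask the work to stop
            }
        }
    });

    try
    {
        await work(cts.Token);
    }
    finally
    {
        cts.Cancel();
        await renewalTask;
    }
}
</code></pre>
<p>The work delegate should observe the cancellation token so it stops promptly if lock ownership is lost.</p>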
<h2>Conclusion</h2>
<p>Distributed locking is essential for maintaining data consistency and preventing race conditions in modern, scalable ASP.NET Core applications. As we've discussed, the problem becomes unavoidable as soon as you move beyond a single instance to horizontally scaled multi-server, container, or background-worker deployments.</p>
<p>We examined several approaches, from database-level locks to Redis, Azure Blob Storage leases, and dedicated coordination services. Each has its place, but Redis-based locking offers the best balance of performance, reliability, and operational simplicity for most situations. The example implementation above shows how to build a well-crafted distributed locking mechanism with minimal external dependencies.</p>
<p>Whether you implement your own solution or use a framework like ABP, understanding the concepts behind distributed locking will help you build more stable and scalable applications. We hope that by sharing our experience we can help you avoid the typical pitfalls and implement distributed locking properly in your own projects.</p>
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1cba43-5d74-c60d-adc7-a847c8769421" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1cba43-5d74-c60d-adc7-a847c8769421" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/stepbystep-aws-secrets-manager-integration-in-abp-framework-projects-3dcblyix</guid>
      <link>https://abp.io/community/posts/stepbystep-aws-secrets-manager-integration-in-abp-framework-projects-3dcblyix</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <category>abp</category>
      <category>tutorial</category>
      <category>best-practices</category>
      <category>abp-framework</category>
      <category>application-configuration</category>
      <title>Step-by-Step AWS Secrets Manager Integration in ABP Framework Projects</title>
      <description>The article shows how to use AWS Secrets Manager in an ABP Framework app to keep secrets like connection strings safe instead of putting them in config files. It explains creating a secret, giving IAM permission, reading it in the app, and adding best practices like caching, rotation, and health checks.</description>
      <pubDate>Mon, 15 Sep 2025 07:28:28 Z</pubDate>
      <a10:updated>2026-03-06T22:40:59Z</a10:updated>
      <content:encoded><![CDATA[<h1>Step-by-Step AWS Secrets Manager Integration in ABP Framework Projects</h1>
<h2>Introduction</h2>
<p>In this article, we discuss how to secure sensitive data in ABP Framework projects using AWS Secrets Manager, cover the core concepts of <em>secret</em> management, and walk through the integration step by step.</p>
<h2>What is the Problem?</h2>
<p>Modern applications must store sensitive data such as API keys, database connection strings, and OAuth client credentials. This data is central to application functionality, but storing it in the wrong place creates serious security risks.</p>
<p>During development, the first place that comes to mind is usually <strong>appsettings.json</strong>. However, this is a configuration file, not a secure store for secrets, especially in production.</p>
<h3>Common Security Risks:</h3>
<ul>
<li><strong>Plain text storage</strong>: Passwords and keys sit unencrypted on disk</li>
<li><strong>Exposure in version control</strong>: Secrets end up committed to Git repositories in plain text</li>
<li><strong>No access control</strong>: Anyone with file access can read the secrets</li>
<li><strong>No rotation</strong>: Credentials must be changed manually</li>
<li><strong>No audit trail</strong>: There is no record of who accessed which secret, or when</li>
</ul>
<h2>.NET User Secrets Tool vs AWS Secrets Manager</h2>
<p><strong>User Secrets (the .NET Secret Manager tool)</strong> is a development-only, local file-based solution that keeps sensitive information out of the repository.</p>
<p><strong>AWS Secrets Manager</strong> is built for production: a centralized, encrypted, and audited secret management service.</p>
<table>
<thead>
<tr><th>Feature</th><th>User Secrets (Dev)</th><th>AWS Secrets Manager (Prod)</th></tr>
</thead>
<tbody>
<tr><td>Scope</td><td>Local developer machine</td><td>All environments (dev/stage/prod)</td></tr>
<tr><td>Storage</td><td>JSON in user profile</td><td>Managed service (centralized)</td></tr>
<tr><td>Encryption</td><td>None (plain text file)</td><td>Encrypted with KMS</td></tr>
<tr><td>Access Control</td><td>OS file permissions</td><td>IAM policies</td></tr>
<tr><td>Rotation</td><td>None</td><td>Yes (automatic)</td></tr>
<tr><td>Audit / Traceability</td><td>None</td><td>Yes (CloudTrail)</td></tr>
<tr><td>Typical Usage</td><td>Quick dev outside repo</td><td>Production secret management</td></tr>
</tbody>
</table>
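<p>For reference, the development-side workflow with the .NET Secret Manager tool looks like this (key names are illustrative):</p>
<pre><code class="language-bash"># Enable user secrets for the project (adds a UserSecretsId to the .csproj)
dotnet user-secrets init

# Store a secret outside the repository
dotnet user-secrets set &quot;ConnectionStrings:Default&quot; &quot;Server=localhost;Database=DevDb;Trusted_Connection=true;&quot;

# List the stored secrets
dotnet user-secrets list
</code></pre>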
<hr />
<h2>AWS Secrets Manager</h2>
<p>AWS Secrets Manager is designed specifically to securely store and manage sensitive data for our applications, and supports features such as secret rotation and cross-region replication.</p>
<p>AWS Secrets Manager offers a 30-day free trial. After that, pricing is $0.40 USD per stored secret per month, plus $0.05 USD per 10,000 API requests. For example, 10 secrets receiving one million API calls per month would cost roughly 10 × $0.40 + 100 × $0.05 = $9.00.</p>
<h3>Key Features:</h3>
<ul>
<li><strong>Automatic encryption</strong>: Secrets are encrypted at rest with AWS KMS</li>
<li><strong>Automatic rotation</strong>: Rotation can be scheduled, optionally backed by Lambda functions</li>
<li><strong>Fine-grained access control</strong>: Access is governed by IAM policies</li>
<li><strong>Audit logging</strong>: Every access is recorded in CloudTrail</li>
<li><strong>Cross-region replication</strong>: Secrets can be replicated to other regions for disaster recovery</li>
<li><strong>API integration</strong>: Full programmatic access via SDKs and the CLI</li>
</ul>
<hr />
<h2>Step 1: AWS Secrets Manager Setup</h2>
<h3>1.1 Creating a Secret in AWS Console</h3>
<p>First, search for the Secrets Manager service in the AWS Management Console.</p>
<ol>
<li><p><strong>AWS Console</strong> → <strong>Secrets Manager</strong> → <strong>Store a new secret</strong></p>
</li>
<li><p>Select <strong>Secret type</strong>:</p>
<ul>
<li><strong>Other type of secret</strong> (For custom key-value pairs)</li>
<li><strong>Credentials for RDS database</strong> (For databases)</li>
<li><strong>Credentials for DocumentDB database</strong></li>
<li><strong>Credentials for Redshift cluster</strong></li>
</ul>
</li>
<li><p>Enter <strong>Secret value</strong>:</p>
</li>
</ol>
<pre><code class="language-json">{
  &quot;ConnectionString&quot;: &quot;Server=myserver;Database=mydb;User Id=myuser;Password=mypassword;&quot;
}
</code></pre>
<ol start="4">
<li>Set <strong>Secret name</strong>: <code>prod/ABPAWSTest/ConnectionString</code></li>
<li>Add <strong>Description</strong>: &quot;ABP Framework connection string for production&quot;</li>
<li>Choose <strong>Encryption key</strong> (default KMS key is sufficient)</li>
<li>Configure <strong>Automatic rotation</strong> settings (optional)</li>
</ol>
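<p>The same secret can also be created from the AWS CLI, which is convenient for scripted environments. This mirrors the console steps above (region and values are illustrative):</p>
<pre><code class="language-bash">aws secretsmanager create-secret \
  --name &quot;prod/ABPAWSTest/ConnectionString&quot; \
  --description &quot;ABP Framework connection string for production&quot; \
  --secret-string '{&quot;ConnectionString&quot;:&quot;Server=myserver;Database=mydb;User Id=myuser;Password=mypassword;&quot;}' \
  --region eu-north-1
</code></pre>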
<h3>1.2 IAM Permissions</h3>
<p>Create an IAM policy for secret access:</p>
<pre><code class="language-json">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [
                &quot;secretsmanager:GetSecretValue&quot;,
                &quot;secretsmanager:DescribeSecret&quot;
            ],
            &quot;Resource&quot;: &quot;arn:aws:secretsmanager:eu-north-1:588118819172:secret:prod/ABPAWSTest/ConnectionString-*&quot;
        }
    ]
}
</code></pre>
<hr />
<h2>Step 2: ABP Framework Project Setup</h2>
<h3>2.1 NuGet Packages</h3>
<p>Add the required AWS packages to your project:</p>
<pre><code class="language-bash">dotnet add package AWSSDK.SecretsManager
dotnet add package AWSSDK.Extensions.NETCore.Setup
</code></pre>
<h3>2.2 Configuration Files</h3>
<p><strong>appsettings.json</strong> (Development):</p>
<pre><code class="language-json">{
  &quot;AWS&quot;: {
    &quot;Profile&quot;: &quot;default&quot;,
    &quot;Region&quot;: &quot;eu-north-1&quot;,
    &quot;AccessKey&quot;: &quot;YOUR_ACCESS_KEY&quot;,
    &quot;SecretKey&quot;: &quot;YOUR_SECRET_KEY&quot;
  },
  &quot;SecretsManager&quot;: {
    &quot;SecretName&quot;: &quot;prod/ABPAWSTest/ConnectionString&quot;,
    &quot;SecretArn&quot;: &quot;arn:aws:secretsmanager:eu-north-1:588118819172:secret:prod/ABPAWSTest/ConnectionString-xtYQxv&quot;
  }
}
</code></pre>
<p><strong>appsettings.Production.json</strong> (Production):</p>
<pre><code class="language-json">{
  &quot;AWS&quot;: {
    &quot;Region&quot;: &quot;eu-north-1&quot;
    // Use environment variables or IAM roles in production
  },
  &quot;SecretsManager&quot;: {
    &quot;SecretName&quot;: &quot;prod/ABPAWSTest/ConnectionString&quot;
  }
}
</code></pre>
<h3>2.3 Environment Variables (Production)</h3>
<pre><code class="language-bash">export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=eu-north-1
</code></pre>
<hr />
<h2>Step 3: AWS Integration Implementation</h2>
<h3>3.1 Program.cs Configuration</h3>
<pre><code class="language-csharp">using Amazon;
using Amazon.SecretsManager;

public class Program
{
    public async static Task&lt;int&gt; Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        
        // AWS Secrets Manager configuration
        var awsOptions = builder.Configuration.GetAWSOptions();
        
        // Read AWS credentials from appsettings
        var accessKey = builder.Configuration[&quot;AWS:AccessKey&quot;];
        var secretKey = builder.Configuration[&quot;AWS:SecretKey&quot;];
        var region = builder.Configuration[&quot;AWS:Region&quot;];
        
        if (!string.IsNullOrEmpty(accessKey) &amp;&amp; !string.IsNullOrEmpty(secretKey))
        {
            awsOptions.Credentials = new Amazon.Runtime.BasicAWSCredentials(accessKey, secretKey);
        }
        
        if (!string.IsNullOrEmpty(region))
        {
            awsOptions.Region = RegionEndpoint.GetBySystemName(region);
        }
        
        builder.Services.AddDefaultAWSOptions(awsOptions);
        builder.Services.AddAWSService&lt;IAmazonSecretsManager&gt;();
        
        // ... ABP configuration
        await builder.AddApplicationAsync&lt;YourAppModule&gt;();
        var app = builder.Build();
        
        await app.InitializeApplicationAsync();
        await app.RunAsync();
        
        return 0;
    }
}
</code></pre>
<h3>3.2 Secrets Manager Service</h3>
<p><strong>Interface:</strong></p>
<pre><code class="language-csharp">public interface ISecretsManagerService
{
    Task&lt;string&gt; GetSecretAsync(string secretName);
    Task&lt;T&gt; GetSecretAsync&lt;T&gt;(string secretName) where T : class;
    Task&lt;string&gt; GetConnectionStringAsync();
}
</code></pre>
<p><strong>Implementation:</strong></p>
<pre><code class="language-csharp">using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;
using Volo.Abp.DependencyInjection;
using System.Text.Json;

public class SecretsManagerService : ISecretsManagerService, IScopedDependency
{
    private readonly IAmazonSecretsManager _secretsManager;
    private readonly IConfiguration _configuration;
    private readonly ILogger&lt;SecretsManagerService&gt; _logger;

    public SecretsManagerService(
        IAmazonSecretsManager secretsManager,
        IConfiguration configuration,
        ILogger&lt;SecretsManagerService&gt; logger)
    {
        _secretsManager = secretsManager;
        _configuration = configuration;
        _logger = logger;
    }

    public async Task&lt;string&gt; GetSecretAsync(string secretName)
    {
        try
        {
            var request = new GetSecretValueRequest
            {
                SecretId = secretName
            };

            var response = await _secretsManager.GetSecretValueAsync(request);
            
            _logger.LogInformation(&quot;Successfully retrieved secret: {SecretName}&quot;, secretName);
            
            return response.SecretString;
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, &quot;Failed to retrieve secret: {SecretName}&quot;, secretName);
            throw;
        }
    }

    public async Task&lt;T&gt; GetSecretAsync&lt;T&gt;(string secretName) where T : class
    {
        var secretValue = await GetSecretAsync(secretName);
        
        try
        {
            return JsonSerializer.Deserialize&lt;T&gt;(secretValue) 
                ?? throw new InvalidOperationException($&quot;Failed to deserialize secret {secretName}&quot;);
        }
        catch (JsonException ex)
        {
            _logger.LogError(ex, &quot;Failed to deserialize secret {SecretName}&quot;, secretName);
            throw;
        }
    }

    public async Task&lt;string&gt; GetConnectionStringAsync()
    {
        var secretName = _configuration[&quot;SecretsManager:SecretName&quot;] 
            ?? throw new InvalidOperationException(&quot;SecretsManager:SecretName configuration is missing&quot;);
            
        return await GetSecretAsync(secretName);
    }
}
</code></pre>
<hr />
<h2>Step 4: Usage Examples</h2>
<h3>4.1 Using in Application Service</h3>
<pre><code class="language-csharp">[RemoteService(false)]
public class DatabaseService : ApplicationService
{
    private readonly ISecretsManagerService _secretsManager;
    
    public DatabaseService(ISecretsManagerService secretsManager)
    {
        _secretsManager = secretsManager;
    }
    
    public async Task&lt;string&gt; GetDatabaseConnectionAsync()
    {
        // Get connection string from AWS Secrets Manager
        var connectionString = await _secretsManager.GetConnectionStringAsync();
        
        // Use the connection string
        return connectionString;
    }
    
    public async Task&lt;ApiConfiguration&gt; GetApiConfigAsync()
    {
        // Deserialize JSON secret
        var config = await _secretsManager.GetSecretAsync&lt;ApiConfiguration&gt;(&quot;prod/MyApp/ApiConfig&quot;);
        
        return config;
    }
}
</code></pre>
<h3>4.2 DbContext Configuration</h3>
<pre><code class="language-csharp">public class YourDbContextConfigurer
{
    public static void Configure(DbContextOptionsBuilder&lt;YourDbContext&gt; builder, string connectionString)
    {
        builder.UseSqlServer(connectionString);
    }

    public static void Configure(DbContextOptionsBuilder&lt;YourDbContext&gt; builder, DbConnection connection)
    {
        builder.UseSqlServer(connection);
    }
}

// Usage in an ABP module
public override void ConfigureServices(ServiceConfigurationContext context)
{
    var configuration = context.Services.GetConfiguration();
    
    // ConfigureServices cannot be async and the final service provider does not
    // exist yet, so build a temporary provider and block on the secret fetch
    // once at startup.
    using var provider = context.Services.BuildServiceProvider();
    var secretsManager = provider.GetRequiredService&lt;ISecretsManagerService&gt;();
    var connectionString = secretsManager.GetConnectionStringAsync()
        .GetAwaiter().GetResult();
    
    context.Services.AddAbpDbContext&lt;YourDbContext&gt;(options =&gt;
    {
        options.AddDefaultRepositories(includeAllEntities: true);
    });
    
    Configure&lt;AbpDbContextOptions&gt;(options =&gt;
    {
        options.Configure(ctx =&gt;
        {
            ctx.DbContextOptions.UseSqlServer(connectionString);
        });
    });
}
</code></pre>
<hr />
<h2>Step 5: Best Practices &amp; Security</h2>
<h3>5.1 Security Best Practices</h3>
<ol>
<li><p><strong>Environment-based Configuration:</strong></p>
<ul>
<li>Development: appsettings.json</li>
<li>Production: Environment variables or IAM roles</li>
</ul>
</li>
<li><p><strong>Principle of Least Privilege:</strong></p>
<pre><code class="language-json">{
  &quot;Effect&quot;: &quot;Allow&quot;,
  &quot;Action&quot;: &quot;secretsmanager:GetSecretValue&quot;,
  &quot;Resource&quot;: &quot;arn:aws:secretsmanager:region:account:secret:specific-secret-*&quot;
}
</code></pre>
</li>
<li><p><strong>Secret Rotation:</strong></p>
<ul>
<li>Set up automatic rotation</li>
<li>Custom rotation logic with Lambda functions</li>
</ul>
</li>
<li><p><strong>Caching Strategy:</strong></p>
<pre><code class="language-csharp">// GetSecretAsync&lt;T&gt; and GetConnectionStringAsync are omitted for brevity
public class CachedSecretsManagerService : ISecretsManagerService
{
    private readonly IMemoryCache _cache;
    private readonly SecretsManagerService _secretsManager;

    public async Task&lt;string&gt; GetSecretAsync(string secretName)
    {
        var cacheKey = $&quot;secret:{secretName}&quot;;

        if (_cache.TryGetValue(cacheKey, out string cachedValue))
        {
            return cachedValue;
        }

        var value = await _secretsManager.GetSecretAsync(secretName);

        _cache.Set(cacheKey, value, TimeSpan.FromMinutes(30));

        return value;
    }
}
</code></pre>
</li>
</ol>
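<p>The caching wrapper needs to be registered as a decorator so that callers receive the cached version. A minimal wiring sketch (the constructor shown here is assumed, since the snippet above omits it):</p>
<pre><code class="language-csharp">builder.Services.AddMemoryCache();

// Register the concrete AWS-backed service, then expose the cached wrapper as the interface
builder.Services.AddSingleton&lt;SecretsManagerService&gt;();
builder.Services.AddSingleton&lt;ISecretsManagerService&gt;(sp =&gt;
    new CachedSecretsManagerService(
        sp.GetRequiredService&lt;IMemoryCache&gt;(),
        sp.GetRequiredService&lt;SecretsManagerService&gt;()));
</code></pre>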
<h3>5.2 Error Handling</h3>
<pre><code class="language-csharp">public async Task&lt;string&gt; GetSecretWithRetryAsync(string secretName)
{
    const int maxRetries = 3;
    var delay = TimeSpan.FromSeconds(1);
    
    for (int i = 0; i &lt; maxRetries; i++)
    {
        try
        {
            return await GetSecretAsync(secretName);
        }
        catch (AmazonSecretsManagerException ex) when (i &lt; maxRetries - 1)
        {
            _logger.LogWarning(ex, &quot;Retry {Attempt} for secret {SecretName}&quot;, i + 1, secretName);
            await Task.Delay(delay);
            delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2); // Exponential backoff
        }
    }
    
    throw new InvalidOperationException($&quot;Failed to retrieve secret {secretName} after {maxRetries} attempts&quot;);
}
</code></pre>
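<p>If your project already references Polly, the same retry-with-exponential-backoff behavior can be expressed as a reusable policy (a sketch, assuming a <code>_logger</code> field is in scope):</p>
<pre><code class="language-csharp">var retryPolicy = Policy
    .Handle&lt;AmazonSecretsManagerException&gt;()
    .WaitAndRetryAsync(
        retryCount: 3,
        // 2s, 4s, 8s
        sleepDurationProvider: attempt =&gt; TimeSpan.FromSeconds(Math.Pow(2, attempt)),
        onRetry: (ex, delay, attempt, _) =&gt;
            _logger.LogWarning(ex, &quot;Retry {Attempt} after {Delay}&quot;, attempt, delay));

var secretValue = await retryPolicy.ExecuteAsync(() =&gt; GetSecretAsync(secretName));
</code></pre>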
<h3>5.3 Performance Optimization</h3>
<pre><code class="language-csharp">// Other interface members and GetSecretFromAwsAsync are omitted for brevity
public class PerformantSecretsManagerService : ISecretsManagerService
{
    private readonly IAmazonSecretsManager _secretsManager;
    private readonly IMemoryCache _cache;
    private readonly ILogger&lt;PerformantSecretsManagerService&gt; _logger;
    private readonly SemaphoreSlim _semaphore = new(1, 1);

    public async Task&lt;string&gt; GetSecretAsync(string secretName)
    {
        var cacheKey = $&quot;secret:{secretName}&quot;;
        
        // Try to get from cache first
        if (_cache.TryGetValue(cacheKey, out string cachedValue))
        {
            return cachedValue;
        }

        // Use semaphore to prevent multiple concurrent requests for the same secret
        await _semaphore.WaitAsync();
        try
        {
            // Double-check pattern
            if (_cache.TryGetValue(cacheKey, out cachedValue))
            {
                return cachedValue;
            }

            // Fetch from AWS
            var value = await GetSecretFromAwsAsync(secretName);
            
            // Cache for 30 minutes
            _cache.Set(cacheKey, value, TimeSpan.FromMinutes(30));
            
            return value;
        }
        finally
        {
            _semaphore.Release();
        }
    }
}
</code></pre>
<hr />
<h2>Step 6: Testing &amp; Debugging</h2>
<h3>6.1 Unit Testing</h3>
<pre><code class="language-csharp">public class SecretsManagerServiceTests : AbpIntegratedTest&lt;TestModule&gt;
{
    private readonly ISecretsManagerService _secretsManager;
    
    public SecretsManagerServiceTests()
    {
        _secretsManager = GetRequiredService&lt;ISecretsManagerService&gt;();
    }
    
    [Fact]
    public async Task Should_Get_Connection_String()
    {
        // Act
        var connectionString = await _secretsManager.GetConnectionStringAsync();
        
        // Assert
        connectionString.ShouldNotBeNullOrEmpty();
        connectionString.ShouldContain(&quot;Server=&quot;);
    }
    
    [Fact]
    public async Task Should_Deserialize_Json_Secret()
    {
        // Arrange
        var secretName = &quot;test/json/config&quot;;
        
        // Act
        var config = await _secretsManager.GetSecretAsync&lt;TestConfig&gt;(secretName);
        
        // Assert
        config.ShouldNotBeNull();
        config.ApiKey.ShouldNotBeNullOrEmpty();
    }
}
</code></pre>
<h3>6.2 Mock Implementation for Testing</h3>
<pre><code class="language-csharp">public class MockSecretsManagerService : ISecretsManagerService, ISingletonDependency
{
    private readonly Dictionary&lt;string, string&gt; _secrets = new()
    {
        [&quot;prod/ABPAWSTest/ConnectionString&quot;] = &quot;Server=localhost;Database=TestDb;Trusted_Connection=true;&quot;,
        [&quot;prod/MyApp/ApiKey&quot;] = &quot;test-api-key&quot;,
        [&quot;prod/MyApp/Config&quot;] = &quot;&quot;&quot;{&quot;ApiUrl&quot;: &quot;https://api.test.com&quot;, &quot;Timeout&quot;: 30}&quot;&quot;&quot;
    };

    public Task&lt;string&gt; GetSecretAsync(string secretName)
    {
        if (_secrets.TryGetValue(secretName, out var secret))
        {
            return Task.FromResult(secret);
        }
        
        throw new ArgumentException($&quot;Unknown secret: {secretName}&quot;);
    }
    
    public async Task&lt;T&gt; GetSecretAsync&lt;T&gt;(string secretName) where T : class
    {
        var json = await GetSecretAsync(secretName);
        return JsonSerializer.Deserialize&lt;T&gt;(json) 
            ?? throw new InvalidOperationException($&quot;Failed to deserialize {secretName}&quot;);
    }
    
    public Task&lt;string&gt; GetConnectionStringAsync()
    {
        return GetSecretAsync(&quot;prod/ABPAWSTest/ConnectionString&quot;);
    }
}
</code></pre>
<h3>6.3 Integration Testing</h3>
<pre><code class="language-csharp">public class SecretsManagerIntegrationTests : IClassFixture&lt;WebApplicationFactory&lt;Program&gt;&gt;
{
    private readonly WebApplicationFactory&lt;Program&gt; _factory;
    private readonly HttpClient _client;

    public SecretsManagerIntegrationTests(WebApplicationFactory&lt;Program&gt; factory)
    {
        _factory = factory;
        _client = _factory.CreateClient();
    }

    [Fact]
    public async Task Should_Connect_To_Database_With_Secret()
    {
        // Arrange &amp; Act
        var response = await _client.GetAsync(&quot;/api/health&quot;);
        
        // Assert
        response.EnsureSuccessStatusCode();
    }
}
</code></pre>
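<p>To keep this test hermetic, the AWS-backed service can be swapped for the mock from section 6.2 using <code>WithWebHostBuilder</code>:</p>
<pre><code class="language-csharp">var factory = _factory.WithWebHostBuilder(builder =&gt;
{
    builder.ConfigureServices(services =&gt;
    {
        // Replace the real service with the in-memory mock
        // (RemoveAll comes from Microsoft.Extensions.DependencyInjection.Extensions)
        services.RemoveAll&lt;ISecretsManagerService&gt;();
        services.AddSingleton&lt;ISecretsManagerService, MockSecretsManagerService&gt;();
    });
});

var client = factory.CreateClient();
</code></pre>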
<hr />
<h2>Step 7: Monitoring &amp; Observability</h2>
<h3>7.1 CloudWatch Metrics</h3>
<pre><code class="language-csharp">// Remaining interface members delegate to _inner and are omitted for brevity
public class MonitoredSecretsManagerService : ISecretsManagerService
{
    // Activities must be created through an ActivitySource instance
    private static readonly ActivitySource ActivitySource = new(&quot;SecretsManager&quot;);
    
    private readonly ISecretsManagerService _inner;
    private readonly IMetrics _metrics;
    private readonly ILogger&lt;MonitoredSecretsManagerService&gt; _logger;

    public async Task&lt;string&gt; GetSecretAsync(string secretName)
    {
        using var activity = ActivitySource.StartActivity(&quot;SecretsManager.GetSecret&quot;);
        activity?.SetTag(&quot;secret.name&quot;, secretName);
        
        var stopwatch = Stopwatch.StartNew();
        
        try
        {
            var result = await _inner.GetSecretAsync(secretName);
            
            _metrics.Counter(&quot;secrets_manager.requests&quot;)
                .WithTag(&quot;secret_name&quot;, secretName)
                .WithTag(&quot;status&quot;, &quot;success&quot;)
                .Increment();
                
            _metrics.Timer(&quot;secrets_manager.duration&quot;)
                .WithTag(&quot;secret_name&quot;, secretName)
                .Record(stopwatch.ElapsedMilliseconds);
            
            return result;
        }
        catch (Exception ex)
        {
            _metrics.Counter(&quot;secrets_manager.requests&quot;)
                .WithTag(&quot;secret_name&quot;, secretName)
                .WithTag(&quot;status&quot;, &quot;error&quot;)
                .WithTag(&quot;error_type&quot;, ex.GetType().Name)
                .Increment();
                
            _logger.LogError(ex, &quot;Failed to retrieve secret {SecretName}&quot;, secretName);
            throw;
        }
    }
}
</code></pre>
<h3>7.2 Health Checks</h3>
<pre><code class="language-csharp">public class SecretsManagerHealthCheck : IHealthCheck
{
    private readonly IAmazonSecretsManager _secretsManager;
    private readonly ILogger&lt;SecretsManagerHealthCheck&gt; _logger;

    public SecretsManagerHealthCheck(
        IAmazonSecretsManager secretsManager,
        ILogger&lt;SecretsManagerHealthCheck&gt; logger)
    {
        _secretsManager = secretsManager;
        _logger = logger;
    }

    public async Task&lt;HealthCheckResult&gt; CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        try
        {
            // Try to list secrets to verify connection
            var request = new ListSecretsRequest { MaxResults = 1 };
            await _secretsManager.ListSecretsAsync(request, cancellationToken);
            
            return HealthCheckResult.Healthy(&quot;AWS Secrets Manager is accessible&quot;);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, &quot;AWS Secrets Manager health check failed&quot;);
            return HealthCheckResult.Unhealthy(&quot;AWS Secrets Manager is not accessible&quot;, ex);
        }
    }
}

// Register in Program.cs
builder.Services.AddHealthChecks()
    .AddCheck&lt;SecretsManagerHealthCheck&gt;(&quot;secrets-manager&quot;);
</code></pre>
<hr />
<h2>Step 8: Advanced Scenarios</h2>
<h3>8.1 Dynamic Configuration Reload</h3>
<pre><code class="language-csharp">public class DynamicSecretsConfigurationProvider : ConfigurationProvider, IDisposable
{
    private readonly ISecretsManagerService _secretsManager;
    private readonly Timer _reloadTimer;
    private readonly string _secretName;

    public DynamicSecretsConfigurationProvider(
        ISecretsManagerService secretsManager,
        string secretName)
    {
        _secretsManager = secretsManager;
        _secretName = secretName;
        
        // Reload every 5 minutes
        _reloadTimer = new Timer(ReloadSecrets, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }

    private async void ReloadSecrets(object state)
    {
        try
        {
            var secretValue = await _secretsManager.GetSecretAsync(_secretName);
            var config = JsonSerializer.Deserialize&lt;Dictionary&lt;string, string&gt;&gt;(secretValue);
            
            Data.Clear();
            foreach (var kvp in config)
            {
                Data[kvp.Key] = kvp.Value;
            }
            
            OnReload();
        }
        catch (Exception ex)
        {
            // Log error but don't throw to avoid crashing the timer
            Console.WriteLine($&quot;Failed to reload secrets: {ex.Message}&quot;);
        }
    }

    public void Dispose()
    {
        _reloadTimer?.Dispose();
    }
}
</code></pre>
<h3>8.2 Multi-Region Failover</h3>
<pre><code class="language-csharp">public class MultiRegionSecretsManagerService : ISecretsManagerService
{
    private readonly List&lt;IAmazonSecretsManager&gt; _clients;
    private readonly ILogger&lt;MultiRegionSecretsManagerService&gt; _logger;

    public MultiRegionSecretsManagerService(
        IConfiguration configuration,
        ILogger&lt;MultiRegionSecretsManagerService&gt; logger)
    {
        _logger = logger;
        _clients = new List&lt;IAmazonSecretsManager&gt;();
        
        // Create clients for multiple regions
        var regions = new[] { &quot;us-east-1&quot;, &quot;us-west-2&quot;, &quot;eu-west-1&quot; };
        foreach (var region in regions)
        {
            var config = new AmazonSecretsManagerConfig
            {
                RegionEndpoint = RegionEndpoint.GetBySystemName(region)
            };
            _clients.Add(new AmazonSecretsManagerClient(config));
        }
    }

    public async Task&lt;string&gt; GetSecretAsync(string secretName)
    {
        Exception lastException = null;
        
        foreach (var client in _clients)
        {
            try
            {
                var request = new GetSecretValueRequest { SecretId = secretName };
                var response = await client.GetSecretValueAsync(request);
                
                _logger.LogInformation(&quot;Retrieved secret from region {Region}&quot;, 
                    client.Config.RegionEndpoint.SystemName);
                
                return response.SecretString;
            }
            catch (Exception ex)
            {
                lastException = ex;
                _logger.LogWarning(ex, &quot;Failed to retrieve secret from region {Region}&quot;, 
                    client.Config.RegionEndpoint.SystemName);
            }
        }
        
        throw new InvalidOperationException(
            &quot;Failed to retrieve secret from all regions&quot;, lastException);
    }
}
</code></pre>
<hr />
<h2>Conclusion</h2>
<p>AWS Secrets Manager integration with ABP Framework significantly enhances the security of your applications. With this integration:</p>
<ul>
<li><strong>Centralized Secret Management</strong>: All secrets are managed centrally</li>
<li><strong>Better Security</strong>: Encryption through KMS and access control through IAM</li>
<li><strong>Audit Trail</strong>: Complete recording of who accessed which secret and when</li>
<li><strong>Automatic Rotation</strong>: Secrets can be rotated automatically</li>
<li><strong>High Availability</strong>: Backed by AWS's high-availability guarantees</li>
<li><strong>Easy Integration</strong>: Native integration with ABP Framework</li>
<li><strong>Cost Effective</strong>: Pay only for what you use</li>
<li><strong>Scalable</strong>: Scales with your application needs</li>
</ul>
<p>With this setup, you can securely use AWS Secrets Manager in your ABP Framework applications and stop worrying about secret management in production.</p>
<h3>Key Benefits:</h3>
<ul>
<li><strong>Developer Productivity</strong>: No hardcoded secrets in config files</li>
<li><strong>Operational Excellence</strong>: Automation of rotation and monitoring</li>
<li><strong>Security Compliance</strong>: Meet enterprise security requirements</li>
<li><strong>Peace of Mind</strong>: Professional-grade secret management</li>
</ul>
<hr />
<h2>Additional Resources</h2>
<ul>
<li><a href="https://docs.aws.amazon.com/secretsmanager/">AWS Secrets Manager Documentation</a></li>
<li><a href="https://docs.abp.io/">ABP Framework Documentation</a></li>
<li><a href="https://docs.aws.amazon.com/sdk-for-net/">AWS SDK for .NET</a></li>
<li><a href="https://aws.amazon.com/architecture/security-identity-compliance/">AWS Security Best Practices</a></li>
<li><a href="https://github.com/fahrigedik/AWSIntegrationABP">Sample Project Repository</a></li>
</ul>
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1c5e73-468d-e8c4-4a03-91d6ec9c3450" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1c5e73-468d-e8c4-4a03-91d6ec9c3450" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/best-practices-for-designing-backward-compatible-rest-apis-in-a-microservice-solution-for-.net-developers-9rzlb4q6</guid>
      <link>https://abp.io/community/posts/best-practices-for-designing-backward-compatible-rest-apis-in-a-microservice-solution-for-.net-developers-9rzlb4q6</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <title>Best Practices for Designing Backward‑Compatible REST APIs in a Microservice Solution for .NET Developers</title>
      <description>This article provides a clear roadmap for preserving backward compatibility in .NET-based microservices—what counts as a “breaking” change and how to do versioning right. It covers add-only schema evolution, ProblemDetails/ETag, Pact-based contract testing, and risk-reduced migrations via canary/blue-green rollouts.</description>
      <pubDate>Thu, 28 Aug 2025 07:22:56 Z</pubDate>
      <a10:updated>2026-03-07T02:09:51Z</a10:updated>
      <content:encoded><![CDATA[<h1>Best Practices for Designing Backward‑Compatible REST APIs in a Microservice Solution for .NET Developers</h1>
<h2>Introduction</h2>
<p>With microservice architecture, each service develops and ships independently at its own pace, and clients rarely update in lockstep. <strong>Backward compatibility</strong> means that when you release new versions, existing consumers continue to work without code changes. This article provides a practical, 6–7 minute tutorial specific to <strong>.NET developers</strong>.</p>
<hr />
<h2>What Counts as “Breaking”? (and what doesn’t)</h2>
<p>A change is <strong>breaking</strong> if a client that previously conformed can <strong>fail at compile time or runtime</strong>, or exhibit <strong>different business‑critical behavior</strong>, <strong>without</strong> changing that client in any way. In other words: if an old client needs to be altered in order to continue functioning as it did, your change is breaking.</p>
<h3>Examples of breaking changes</h3>
<ul>
<li><strong>Deleting or renaming an endpoint</strong> or modifying its URL/route.</li>
<li><strong>Making an existing field required</strong> (e.g., requiring <code>address</code>).</li>
<li><strong>Data type or format changes</strong> (e.g., <code>price: string</code> → <code>price: number</code>, or date format changes).</li>
<li><strong>Altering default behavior or ordering</strong> that clients implicitly depend on (hidden contracts).</li>
<li><strong>Changing the error model</strong> or HTTP status codes in a manner that breaks pre-existing error handling.</li>
<li><strong>Renaming fields</strong> or <strong>making optional fields required</strong> in requests or responses.</li>
<li><strong>Reinterpreting semantics</strong> (e.g., <code>status=&quot;closed&quot;</code> formerly included archived items, but no longer does).</li>
</ul>
<h3>Examples of non‑breaking changes</h3>
<ul>
<li><strong>Optional fields or query parameters can be added</strong> (clients may disregard them).</li>
<li><strong>Adding new enum values</strong> (if the clients default to a safe behavior for unrecognized values).</li>
<li><strong>Adding a new endpoint</strong> while leaving the previous one unchanged.</li>
<li><strong>Performance enhancements</strong> that leave input/output unchanged.</li>
<li><strong>Including metadata</strong> (e.g., pagination links) without changing the current payload shape.</li>
</ul>
<blockquote>
<p>Golden rule: <strong>Old clients should continue to work exactly as they did before—without any changes.</strong></p>
</blockquote>
<hr />
<h2>Versioning Strategy</h2>
<p>Versioning is your master control lever for managing change. Typical methods:</p>
<ol>
<li><strong>URI Segment</strong> (simplest)</li>
</ol>
<pre><code>GET /api/v1/orders
GET /api/v2/orders
</code></pre>
<p>Pros: Cache/gateway‑friendly; explicit in docs. Cons: URL noise.</p>
<ol start="2">
<li><strong>Header‑Based</strong></li>
</ol>
<pre><code>GET /api/orders
x-api-version: 2.0
</code></pre>
<p>Pros: Clean URLs; can be combined with other version readers. Cons: Needs proxy/CDN rules to be version-aware.</p>
<ol start="3">
<li><strong>Media Type</strong></li>
</ol>
<pre><code>GET /api/orders
Accept: application/json;v=2
</code></pre>
<p>Pros: Semantically accurate. Cons: More complicated to test and implement.</p>
<p><strong>Recommendation:</strong> For the majority of teams, favor <strong>URI segments</strong>, with an optional <strong><code>x-api-version</code></strong> header for flexibility.</p>
<h3>Quick Setup in ASP.NET Core (Asp.Versioning)</h3>
<pre><code class="language-csharp">// Program.cs
using Asp.Versioning;

builder.Services.AddControllers();
builder.Services.AddApiVersioning(o =&gt;
{
    o.DefaultApiVersion = new ApiVersion(1, 0);
    o.AssumeDefaultVersionWhenUnspecified = true;
    o.ReportApiVersions = true; // response header: api-supported-versions
    o.ApiVersionReader = ApiVersionReader.Combine(
        new UrlSegmentApiVersionReader(),
        new HeaderApiVersionReader(&quot;x-api-version&quot;)
    );
})
.AddMvc()
.AddApiExplorer(o =&gt;
{
    o.GroupNameFormat = &quot;'v'VVV&quot;; // v1, v2
    o.SubstituteApiVersionInUrl = true;
});
</code></pre>
<pre><code class="language-csharp">// Controller
using Asp.Versioning;

[ApiController]
[ApiVersion(&quot;1.0&quot;, Deprecated = true)]
[ApiVersion(&quot;2.0&quot;)]
[Route(&quot;api/v{version:apiVersion}/orders&quot;)]
public class OrdersController : ControllerBase
{
    [HttpGet]
    [MapToApiVersion(&quot;1.0&quot;)]
    public IActionResult GetV1() =&gt; Ok(new { message = &quot;v1&quot; });

    [HttpGet]
    [MapToApiVersion(&quot;2.0&quot;)]
    public IActionResult GetV2() =&gt; Ok(new { message = &quot;v2&quot;, includes = new []{&quot;items&quot;} });
}
</code></pre>
<hr />
<h2>Schema Evolution Playbook (JSON &amp; DTO)</h2>
<p>Obey the following rules for compatibility‑safe evolution:</p>
<ul>
<li><strong>Add‑only changes</strong>: Favor adding <strong>optional</strong> fields; do not remove/rename fields.</li>
<li><strong>Maintain defaults</strong>: When the new field is disregarded, the old functionality must not change.</li>
<li><strong>Enum extension</strong>: Clients should handle unknown enum values gracefully (default behavior).</li>
<li><strong>Deprecation pipeline</strong>: Mark fields/endpoints as deprecated <strong>at least one version</strong> prior to removal and publicize extensively.</li>
<li><strong>Stability by contract</strong>: Record any unspoken contracts (ordering, casing, formats) that clients depend on.</li>
</ul>
<h3>Example: adding a non‑breaking field</h3>
<pre><code class="language-csharp">public record OrderDto(
    Guid Id,
    decimal Total,
    string Currency,
    string? SalesChannel // new, optional
);
</code></pre>
<hr />
<h2>Compatibility‑Safe API Behaviors</h2>
<ul>
<li><strong>Error model</strong>: Use a standard structure (e.g., RFC 7807 <code>ProblemDetails</code>). Avoid ad‑hoc error shapes on a per-endpoint basis.</li>
<li><strong>Versioning/Deprecation communication</strong> through headers:
<ul>
<li><code>api-supported-versions: 1.0, 2.0</code></li>
<li><code>Deprecation: true</code> (on deprecated endpoints)</li>
<li><code>Sunset: Wed, 01 Oct 2025 00:00:00 GMT</code> (planned removal date)</li>
</ul>
</li>
<li><strong>Idempotency</strong>: Use an <code>Idempotency-Key</code> header for retry-safe POSTs.</li>
<li><strong>Optimistic concurrency</strong>: Utilize <code>ETag</code>/<code>If-Match</code> to prevent lost updates.</li>
<li><strong>Pagination</strong>: Prefer cursor tokens (<code>nextPageToken</code>) to protect clients from sorting/index changes.</li>
<li><strong>Time</strong>: Employ ISO‑8601 in UTC; record time‑zone semantics and rounding conventions.</li>
</ul>
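<p>The enum rule above can be sketched from the client's perspective. This is a minimal illustration (the <code>OrderStatus</code> values and the <code>'unknown'</code> fallback are hypothetical names, not part of any real API): the client maps unrecognized server values to a safe default instead of failing.</p>
<pre><code class="language-ts">// Statuses this client version knows about; the server may add more later.
const KNOWN_STATUSES = ['open', 'closed', 'shipped'] as const;
type OrderStatus = (typeof KNOWN_STATUSES)[number] | 'unknown';

// Map any server-sent value to a status this client understands,
// falling back to 'unknown' instead of throwing on new enum values.
function parseStatus(raw: string): OrderStatus {
  return (KNOWN_STATUSES as readonly string[]).includes(raw)
    ? (raw as OrderStatus)
    : 'unknown';
}
</code></pre>
<p>An old client built this way keeps working when the server later starts sending a new value such as <code>'archived'</code>: it degrades to <code>'unknown'</code> rather than breaking.</p>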
<hr />
<h2>Rollout &amp; Deprecation Policy</h2>
<p>A good deprecation policy is <strong>announce → coexist → remove</strong>:</p>
<ol>
<li><strong>Announce</strong>: Release changelog, docs, and comms (mail/Slack) with v2 information and the sunset date.</li>
<li><strong>Coexist</strong>: Operate v1 and v2 side by side. Employ gateway percentage routing for progressive cutover.</li>
<li><strong>Observability</strong>: Monitor errors/latency/usage <strong>by version</strong>. When v1 traffic falls below ~5%, plan for removal.</li>
<li><strong>Remove</strong>: After the sunset date, return <strong>410 (Gone)</strong> with a link to migration documentation.</li>
</ol>
<p><strong>Canary &amp; Blue‑Green</strong>: Initialize v2 with a small traffic portion and compare error/latency budgets prior to scaling up.</p>
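<p>The canary idea above can be sketched as deterministic percentage routing, the way a gateway might implement it. This is an illustrative sketch, not any specific gateway's API; the FNV-1a hash and the percentage threshold are assumptions:</p>
<pre><code class="language-ts">// Deterministically bucket a client into 0-99 with a simple FNV-1a hash,
// so the same client is always routed to the same API version.
function bucketOf(clientId: string): number {
  let h = 2166136261;
  for (let i = 0; i &lt; clientId.length; i++) {
    h ^= clientId.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return (h &gt;&gt;&gt; 0) % 100;
}

// Send a fixed percentage of traffic to v2 during the canary phase.
function routeVersion(clientId: string, v2Percent: number): 'v1' | 'v2' {
  return bucketOf(clientId) &lt; v2Percent ? 'v2' : 'v1';
}
</code></pre>
<p>Because routing is sticky per client, error and latency budgets can be compared per version over stable populations before scaling v2 up.</p>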
<hr />
<h2>Contract &amp; Compatibility Testing</h2>
<ul>
<li><strong>Consumer‑Driven Contracts</strong>: Write expectations using Pact.NET; verify at provider CI.</li>
<li><strong>Golden files / snapshots</strong>: Freeze representative JSON payloads and automatically detect regressions.</li>
<li><strong>Version-specific smoke tests</strong>: Maintain separate, minimal test suites for v1 and v2.</li>
<li><strong>SemVer discipline</strong>: Minor = backward‑compatible; Major = breaking (avoid when possible).</li>
</ul>
<p>Minimal example (xUnit + snapshot style):</p>
<pre><code class="language-csharp">[Fact]
public async Task Orders_v1_contract_should_match_snapshot()
{
    var resp = await _client.GetStringAsync(&quot;/api/v1/orders&quot;);
    Approvals.VerifyJson(resp); // snapshot comparison
}
</code></pre>
<hr />
<h2>Tooling &amp; Docs (for .NET)</h2>
<ul>
<li><strong>Asp.Versioning (NuGet)</strong>: API versioning + ApiExplorer integration.</li>
<li><strong>Swashbuckle / NSwag</strong>: Generate an OpenAPI definition <strong>for every version</strong> (<code>/swagger/v1/swagger.json</code>, <code>/swagger/v2/swagger.json</code>). Display both in Swagger UI.</li>
<li><strong>Polly</strong>: Client‑side retries/fallbacks to handle transient failures and ensure resilience.</li>
<li><strong>Serilog + OpenTelemetry</strong>: Collect metrics/logs/traces by version for observability and SLOs.</li>
</ul>
<p>Swagger UI configuration by group name:</p>
<pre><code class="language-csharp">app.UseSwagger();
app.UseSwaggerUI(c =&gt;
{
    c.SwaggerEndpoint(&quot;/swagger/v1/swagger.json&quot;, &quot;API v1&quot;);
    c.SwaggerEndpoint(&quot;/swagger/v2/swagger.json&quot;, &quot;API v2&quot;);
});
</code></pre>
<hr />
<h2>Conclusion</h2>
<p>Backward compatibility is not a version number—it is <strong>disciplined change management</strong>. When you use add‑only schema evolution, a well‑defined versioning strategy, strict contract testing, and rolling rollout, you maintain microservice independence and safeguard consumer experience.</p>
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1c01bb-bdd6-2da7-850d-9e24579f79c4" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1c01bb-bdd6-2da7-850d-9e24579f79c4" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/angular-application-builder-transitioning-from-webpack-to-esbuild-3yzhzfl0</guid>
      <link>https://abp.io/community/posts/angular-application-builder-transitioning-from-webpack-to-esbuild-3yzhzfl0</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <title>Angular Application Builder: Transitioning from Webpack to Esbuild</title>
      <description>We compare the classic Webpack-based bundling used by Angular applications with the new esbuild-based system, and migrate to the new system step by step.</description>
      <pubDate>Wed, 30 Jul 2025 06:44:02 Z</pubDate>
      <a10:updated>2026-03-07T04:06:24Z</a10:updated>
      <content:encoded><![CDATA[<h1>Angular Application Builder: Transitioning from Webpack to Esbuild</h1>
<h2>Introduction</h2>
<p>In this article, we compare the classic Webpack-based bundling long used by Angular applications with the new esbuild-based system, and then migrate to the new system step by step.</p>
<hr />
<h2>What is the Webpack Build System?</h2>
<p>Webpack is a capable module bundler popular among web developers. It bundles a huge number of files of different formats (JavaScript, CSS, even HTML) into a small set of output files. Angular long employed Webpack as its primary build system.</p>
<p>Why did Angular migrate away from Webpack? The main reasons are:</p>
<ul>
<li>Slow build times.</li>
<li>Complex configuration.</li>
<li>Increased build times as <code>node_modules</code> dependencies grow.</li>
</ul>
<hr />
<h2>What is Esbuild?</h2>
<p>Esbuild is a next-generation bundler developed specifically to bundle and compile JavaScript and TypeScript extremely fast. It is written in Go, and because it takes advantage of multi-core parallel processing, it is dramatically faster than JavaScript-based bundlers.</p>
<hr />
<h2>Benchmark Tests: Esbuild vs Other Bundlers</h2>
<h3>JavaScript Benchmark</h3>
<p>In this benchmark, we bundle the three.js library ten times without any cache to create a fresh bundle.</p>
<table>
<thead>
<tr><th>Bundler</th><th>Time</th><th>Relative slowdown</th><th>Absolute speed</th><th>Output size</th></tr>
</thead>
<tbody>
<tr><td>esbuild</td><td>0.39s</td><td>1x</td><td>1403.7 kloc/s</td><td>5.80mb</td></tr>
<tr><td>parcel 2</td><td>14.91s</td><td>38x</td><td>36.7 kloc/s</td><td>5.78mb</td></tr>
<tr><td>rollup 4 + terser</td><td>34.10s</td><td>87x</td><td>16.1 kloc/s</td><td>5.82mb</td></tr>
<tr><td>webpack 5</td><td>41.21s</td><td>106x</td><td>13.3 kloc/s</td><td>5.84mb</td></tr>
</tbody>
</table>
<h3>TypeScript Benchmark</h3>
<p>This benchmark uses the Rome codebase to simulate bundling a large TypeScript project.</p>
<table>
<thead>
<tr><th>Bundler</th><th>Time</th><th>Relative slowdown</th><th>Absolute speed</th><th>Output size</th></tr>
</thead>
<tbody>
<tr><td>esbuild</td><td>0.10s</td><td>1x</td><td>1318.4 kloc/s</td><td>0.97mb</td></tr>
<tr><td>parcel 2</td><td>6.91s</td><td>69x</td><td>16.1 kloc/s</td><td>0.96mb</td></tr>
<tr><td>webpack 5</td><td>16.69s</td><td>167x</td><td>8.3 kloc/s</td><td>1.27mb</td></tr>
</tbody>
</table>
<hr />
<h2>Application Builder in Angular</h2>
<p>Starting with Angular 17, Angular's build system was rewritten to a large extent. The new build system uses the <code>application</code> builder, designed for faster builds. The new builder:</p>
<ul>
<li>Offers simplified configuration files.</li>
<li>Significantly reduces build times thanks to Esbuild.</li>
<li>Integrates modern development needs such as SSR, prerendering, and HMR.</li>
<li>Is enabled by default in new projects.</li>
</ul>
<hr />
<h2>Migrating to the New Build System in Angular</h2>
<p>Angular 17 and later ships with the new build system. Existing projects can be migrated with an automated schematic provided by the Angular team for exactly this scenario.</p>
<h3>Automated Migration (Recommended)</h3>
<p>The automatic migration will remove Webpack-specific code and modify <code>angular.json</code> along with other relevant files.</p>
<pre><code class="language-bash">ng update @angular/cli --name use-application-builder
</code></pre>
<h4>Steps performed by migration:</h4>
<ul>
<li><strong>Builder Conversion:</strong> Converts existing <code>browser</code> targets to the new <code>application</code> builder.</li>
<li><strong>SSR Builders Removal:</strong> Deletes existing SSR builders, since the new <code>application</code> builder already includes SSR support.</li>
<li><strong>Configuration Updates:</strong> Updates configuration files (<code>angular.json</code>) to be compatible with the new system.</li>
<li><strong>TypeScript Configuration Merge:</strong> Merges <code>tsconfig.server.json</code> into <code>tsconfig.app.json</code>.</li>
<li><strong>Server Code Updates:</strong> Updates server-side code (Node.js/Express) to match the new bootstrapping and directory structure.</li>
<li><strong>Webpack-Specific Style Cleanup:</strong> Removes Webpack-specific syntax (<code>^</code>, <code>~</code>) from style imports and reorganizes stylesheets.</li>
<li><strong>Dependency Optimization:</strong> If <code>@angular-devkit/build-angular</code> is not used elsewhere, it is replaced by a smaller, optimized build package.</li>
</ul>
<hr />
<h3>Manual Migration</h3>
<p>For existing Angular projects, manual migration to the new build system is available with two stable and supported options.</p>
<hr />
<h3>🔹 1. <code>browser-esbuild</code> Builder (Compatible Migration)</h3>
<ul>
<li>Generates only the client-side bundle.</li>
<li>Designed for compatibility with the old <code>browser</code> builder.</li>
<li>Minimal breaking changes.</li>
</ul>
<h4>Example <code>angular.json</code> Modification:</h4>
<pre><code class="language-json">&quot;architect&quot;: {
  &quot;build&quot;: {
    &quot;builder&quot;: &quot;@angular-devkit/build-angular:browser&quot;
</code></pre>
<p>⬇️ Change to:</p>
<pre><code class="language-json">&quot;architect&quot;: {
  &quot;build&quot;: {
    &quot;builder&quot;: &quot;@angular-devkit/build-angular:browser-esbuild&quot;
</code></pre>
<hr />
<h3>🔹 2. <code>application</code> Builder (Full Migration)</h3>
<ul>
<li>Provides client-side and optional server-side rendering (SSR) support.</li>
<li>Includes advanced features like build-time prerendering.</li>
<li>Default builder for new Angular CLI projects.</li>
<li>Recommended for future SSR implementations.</li>
</ul>
<h4>Example <code>angular.json</code> Modification:</h4>
<pre><code class="language-json">&quot;architect&quot;: {
  &quot;build&quot;: {
    &quot;builder&quot;: &quot;@angular-devkit/build-angular:browser&quot;
</code></pre>
<p>⬇️ Change to:</p>
<pre><code class="language-json">&quot;architect&quot;: {
  &quot;build&quot;: {
    &quot;builder&quot;: &quot;@angular-devkit/build-angular:application&quot;
</code></pre>
<h4>Additional Required Updates:</h4>
<table>
<thead>
<tr><th>Old Setting</th><th>New Setting / Explanation</th></tr>
</thead>
<tbody>
<tr><td><code>main</code></td><td>Rename to <code>browser</code>.</td></tr>
<tr><td><code>polyfills</code></td><td>Should be an array, not a single file.</td></tr>
<tr><td><code>buildOptimizer</code></td><td>Remove (handled by <code>optimization</code>).</td></tr>
<tr><td><code>resourcesOutputPath</code></td><td>Remove (default is now <code>media</code>).</td></tr>
<tr><td><code>vendorChunk</code></td><td>Remove (no longer needed).</td></tr>
<tr><td><code>commonChunk</code></td><td>Remove (no longer needed).</td></tr>
<tr><td><code>deployUrl</code></td><td>Remove (use <code>&lt;base href&gt;</code> instead).</td></tr>
<tr><td><code>ngswConfigPath</code></td><td>Rename to <code>serviceWorker</code>.</td></tr>
</tbody>
</table>
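<p>Putting these settings together, a migrated <code>build</code> target might look like the following sketch (the entry file and output path are illustrative, not required values):</p>
<pre><code class="language-json">&quot;architect&quot;: {
  &quot;build&quot;: {
    &quot;builder&quot;: &quot;@angular-devkit/build-angular:application&quot;,
    &quot;options&quot;: {
      &quot;browser&quot;: &quot;src/main.ts&quot;,
      &quot;polyfills&quot;: [&quot;zone.js&quot;],
      &quot;outputPath&quot;: &quot;dist/my-app&quot;
    }
  }
}
</code></pre>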
<hr />
<h3>For SSR-Enabled Applications</h3>
<ul>
<li>The original SSR builders were distinct (<code>prerender</code>, <code>server</code>, <code>server-dev</code>, <code>app-shell</code>) and are consolidated into the single <code>application</code> builder.</li>
<li><code>ng update</code> deletes old configurations and sets up a new <code>@angular/ssr</code> package.</li>
<li>Server code and configurations are automatically adapted to the new system.</li>
<li>It works with both <code>application</code> and <code>browser-esbuild</code> builders.</li>
</ul>
<hr />
<h2>Conclusion</h2>
<p>Angular's new build system is a significant upgrade, meeting modern web development needs by trading Webpack's slow and cumbersome infrastructure for the simplicity and speed of esbuild.</p>
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1b6c3f-b5a9-344d-8387-566aeac9d3d5" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1b6c3f-b5a9-344d-8387-566aeac9d3d5" medium="image" />
    </item>
    <item>
      <guid isPermaLink="true">https://abp.io/community/posts/a-modern-approach-to-angular-dependency-injection-using-inject-function-8np4o1ap</guid>
      <link>https://abp.io/community/posts/a-modern-approach-to-angular-dependency-injection-using-inject-function-8np4o1ap</link>
      <a10:author>
        <a10:name>fahrigedik</a10:name>
        <a10:uri>https://abp.io/community/members/fahrigedik</a10:uri>
      </a10:author>
      <title>A Modern Approach to Angular Dependency Injection using inject function</title>
      <description>This article explores Angular’s modern inject() function, introduced in Angular 14, as a cleaner alternative to traditional constructor-based dependency injection. It enables injecting services outside of classes—such as in standalone functions or composable utilities—making code more modular and readable.</description>
      <pubDate>Fri, 25 Jul 2025 06:07:48 Z</pubDate>
      <a10:updated>2026-03-07T03:06:42Z</a10:updated>
      <content:encoded><![CDATA[<h1>A Modern Approach to Angular Dependency Injection using <code>inject</code></h1>
<h2>Introduction</h2>
<p>Dependency Injection (DI) enables developers to build maintainable Angular applications. The <code>inject</code> function, which arrived with Angular 14, lets developers handle dependencies through a simpler, type-safe mechanism. This article presents the advantages of this approach and how to adopt it in your Angular projects.</p>
<h2>Problem</h2>
<p>The traditional method of dependency injection in Angular requires developers to declare dependencies as constructor parameters. As applications expand, this can produce cluttered constructors that make components harder to understand and maintain. The issue is most severe in large Angular applications with intricate components and many dependencies.</p>
<h2>Solutions</h2>
<p>The Angular team has developed a CLI schematics command that automatically migrates existing projects.</p>
<pre><code class="language-ps">ng generate @angular/core:inject
</code></pre>
<h3>Traditional Dependency Injection</h3>
<pre><code class="language-ts">@Component({
  selector: 'app-example',
  templateUrl: './example.component.html',
})
export class ExampleComponent {
  constructor(private exampleService: ExampleService) {}
}
</code></pre>
<p>This approach is familiar to most Angular developers. However, when too many dependencies are injected through the constructor and the constructor performs too many operations, the class becomes complex, and manageability and readability suffer.</p>
<h3>The New Dependency Injection</h3>
<pre><code class="language-ts">@Component({
  selector: 'app-example',
  templateUrl: './example.component.html',
})
export class ExampleComponent {
  private exampleService = inject(ExampleService);
}
</code></pre>
<p>It's a safer, cleaner, and easier-to-read replacement for the traditional constructor-based method that makes it easier for developers to keep their code better organized and consistent. By leveraging the inject function, you reduce boilerplate code, eliminate mistakes, and simplify testing. Your dependencies are simpler to maintain and manage, especially in larger applications where readability and maintainability are crucial to scale and introduce new team members effectively.</p>
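<p>Conceptually, <code>inject</code> resolves a dependency from the current injection context while the class field initializer runs. The following framework-free sketch illustrates that idea with a toy registry; it is a simplified model for intuition, not Angular's real injector:</p>
<pre><code class="language-ts">// Toy stand-in for Angular's root injector: one cached instance per class.
type Ctor&lt;T&gt; = new () =&gt; T;
const registry = new Map&lt;Ctor&lt;unknown&gt;, unknown&gt;();

// Toy inject(): create the instance on first use and cache it,
// similar in spirit to providedIn: 'root'.
function inject&lt;T&gt;(token: Ctor&lt;T&gt;): T {
  if (!registry.has(token)) {
    registry.set(token, new token());
  }
  return registry.get(token) as T;
}

class ExampleService {
  doSomething(): string {
    return 'done';
  }
}

// Field-initializer injection, mirroring the Angular snippet above.
class ExampleComponent {
  private exampleService = inject(ExampleService);

  run(): string {
    return this.exampleService.doSomething();
  }
}
</code></pre>
<p>Every consumer that calls <code>inject(ExampleService)</code> receives the same cached instance, which mirrors the singleton behavior of a root-provided Angular service.</p>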
<h3>Why We Chose the <code>inject</code> Approach</h3>
<p>We chose the <code>inject</code> approach because it makes the code more readable, maintainable, and developer-friendly. It reduces boilerplate code and follows modern Angular development principles.</p>
<h3>Implementation</h3>
<p>You can use the automatic CLI schematic for the migration, or apply <code>inject</code> manually as the following example demonstrates.</p>
<p><strong>Step 1: Import inject</strong></p>
<pre><code class="language-ts">import { Component, inject } from '@angular/core';
import { ExampleService } from './example.service';

@Component({
  selector: 'app-example',
  templateUrl: './example.component.html',
})
export class ExampleComponent {
  private exampleService = inject(ExampleService);

  exampleMethod() {
    this.exampleService.doSomething();
  }
}
</code></pre>
<p><strong>Step 2: Using inject in Services</strong></p>
<pre><code class="language-ts">import { Injectable, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({ providedIn: 'root' })
export class ExampleService {
  private http = inject(HttpClient);

  doSomething() {
    return this.http.get('/api/example');
  }
}
</code></pre>
<h2>Conclusion</h2>
<p>The <code>inject</code> function provided by Angular makes your code more maintainable through better dependency management, improving both the architectural structure of your applications and developer productivity.</p>
<p>You can find additional information about Dependency Injection in Angular through their official documentation.</p>
]]></content:encoded>
      <media:thumbnail url="https://abp.io/api/posts/cover-picture-source/3a1b525e-bee4-49fd-45d6-a48f40d77d96" />
      <media:content url="https://abp.io/api/posts/cover-picture-source/3a1b525e-bee4-49fd-45d6-a48f40d77d96" medium="image" />
    </item>
  </channel>
</rss>