
DistributedCache ApplicationConfigurationDto Time Out #10232


heshengli created
  • Exception message and full stack trace:
  • Steps to reproduce the issue:

    [2025-12-18 14:00:02.801 +08:00] [WRN] [Volo.Abp.Caching.DistributedCache] Timeout awaiting response (outbound=1KiB, inbound=0KiB, 5502ms elapsed, timeout is 5000ms), command=HMGET, next: HMGET c:Volo.Abp.AspNetCore.Mvc.ApplicationConfigurations.ApplicationConfigurationDto,k:vSkysoft:ApplicationConfiguration_77ea1cc56a3e9b7aecee3a15e9cea1f5_zh-Hans, inst: 0, qu: 0, qs: 8, aw: False, bw: SpinningDown, rs: ReadAsync, ws: Idle, in: 87304, in-pipe: 0, out-pipe: 0, last-in: 0, cur-in: 0, sync-ops: 18, async-ops: 4395440, serverEndpoint: 10.166.0.97:6379, conn-sec: 770.75, aoc: 0, mc: 1/1/0, mgr: 10 of 10 available, clientName: 2623c49bb095(SE.Redis-v2.7.27.49176), IOCP: (Busy=0,Free=1000,Min=1,Max=1000), WORKER: (Busy=99,Free=32668,Min=16,Max=32767), POOL: (Threads=99,QueuedItems=30,CompletedItems=89914679,Timers=26), v: 2.7.27.49176 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)

    StackExchange.Redis.RedisTimeoutException: Timeout awaiting response (outbound=1KiB, inbound=0KiB, 5502ms elapsed, timeout is 5000ms), command=HMGET, next: HMGET c:Volo.Abp.AspNetCore.Mvc.ApplicationConfigurations.ApplicationConfigurationDto,k:vSkysoft:ApplicationConfiguration_77ea1cc56a3e9b7aecee3a15e9cea1f5_zh-Hans, inst: 0, qu: 0, qs: 8, aw: False, bw: SpinningDown, rs: ReadAsync, ws: Idle, in: 87304, in-pipe: 0, out-pipe: 0, last-in: 0, cur-in: 0, sync-ops: 18, async-ops: 4395440, serverEndpoint: 10.166.0.97:6379, conn-sec: 770.75, aoc: 0, mc: 1/1/0, mgr: 10 of 10 available, clientName: 2623c49bb095(SE.Redis-v2.7.27.49176), IOCP: (Busy=0,Free=1000,Min=1,Max=1000), WORKER: (Busy=99,Free=32668,Min=16,Max=32767), POOL: (Threads=99,QueuedItems=30,CompletedItems=89914679,Timers=26), v: 2.7.27.49176 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
       at Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache.GetAndRefreshAsync(String key, Boolean getData, CancellationToken token)
       at Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache.GetAsync(String key, CancellationToken token)
       at Volo.Abp.Caching.DistributedCache`2.GetAsync(TCacheKey key, Nullable`1 hideErrors, Boolean considerUow, CancellationToken token)

    [2025-12-18 14:00:02.801 +08:00] [WRN] [Volo.Abp.Caching.DistributedCache] ---------- Exception Data ----------
    Redis-Message = HMGET c:Volo.Abp.AspNetCore.Mvc.ApplicationConfigurations.ApplicationConfigurationDto,k:vSkysoft:ApplicationConfiguration_Anonymous_zh-Hans
    Redis-Message-Next = HMGET c:Volo.Abp.AspNetCore.Mvc.ApplicationConfigurations.ApplicationConfigurationDto,k:vSkysoft:ApplicationConfiguration_77ea1cc56a3e9b7aecee3a15e9cea1f5_zh-Hans
    Redis-OpsSinceLastHeartbeat = 0
    Redis-Queue-Awaiting-Write = 0
    Redis-Queue-Awaiting-Response = 8
    Redis-Active-Writer = False
    Redis-Backlog-Writer = SpinningDown
    Redis-Read-State = ReadAsync
    Redis-Write-State = Idle
    Redis-Inbound-Bytes = 87304
    Redis-Inbound-Pipe-Bytes = 0
    Redis-Outbound-Pipe-Bytes = 0
    Redis-Last-Result-Bytes = 0
    Redis-Inbound-Buffer-Bytes = 0
    Redis-Sync-Ops = 18
    Redis-Async-Ops = 4395440
    Redis-Server-Endpoint = 10.166.0.97:6379
    Redis-Server-Connected-Seconds = 770.75
    Redis-Abort-On-Connect = 0
    Redis-Multiplexer-Connects = 1/1/0
    Redis-Manager = 10 of 10 available
    Redis-Client-Name = 2623c49bb095(SE.Redis-v2.7.27.49176)
    Redis-ThreadPool-IO-Completion = (Busy=0,Free=1000,Min=1,Max=1000)
    Redis-ThreadPool-Workers = (Busy=99,Free=32668,Min=16,Max=32767)
    Redis-ThreadPool-Items = (Threads=99,QueuedItems=30,CompletedItems=89914679,Timers=26)
    Redis-Busy-Workers = 99
    Redis-Version = 2.7.27.49176
    redis-command = HMGET c:Volo.Abp.AspNetCore.Mvc.ApplicationConfigurations.ApplicationConfigurationDto,k:vSkysoft:ApplicationConfiguration_Anonymous_zh-Hans
    request-sent-status = Sent
    redis-server = 10.166.0.97:6379

3 Answer(s)
  • AI-Bot created (Support Team ABP Agent)
    1. Solution

    Your log shows StackExchange.Redis timing out on HMGET while ABP is trying to read the cached ApplicationConfigurationDto (the /api/abp/application-configuration pipeline). These timeouts are almost always infrastructure/configuration issues around Redis connectivity, load, or client timeout settings rather than ABP code defects. Apply the following, in order:
    • Verify Redis health and networking

      • Ensure the Redis endpoint is reachable from the app containers/VMs (latency, packet loss, firewall/NAT).
      • Check Redis CPU, slowlog, and INFO stats for command latency or blocking operations (a diagnostics sketch follows this list).
      • If running in containers/K8s, confirm there’s no DNS flap or intermittent network policy drops.
    • Increase StackExchange.Redis timeouts conservatively

      • In appsettings.json (or the environment), add a Redis configuration string with larger timeouts:

            {
              "Redis": {
                "Configuration": "10.166.0.97:6379,abortConnect=false,connectRetry=5,connectTimeout=10000,syncTimeout=10000,asyncTimeout=10000,keepAlive=60,allowAdmin=false"
              }
            }

        Notes:
      • asyncTimeout applies to asynchronous operations (such as the failing GetAsync/HMGET here) and syncTimeout to synchronous ones; keep both aligned. responseTimeout no longer has any effect in StackExchange.Redis 2.x.
      • connectRetry and connectTimeout help on transient network issues.
      • keepAlive helps long-lived connections in some networks.
    • Use ABP’s Redis integration package and set a key prefix

      • Ensure the Web (and Auth/Tiered) projects reference:
        • Volo.Abp.Caching.StackExchangeRedis
      • Configure the distributed cache with a clear key prefix to avoid collisions:

            public override void ConfigureServices(ServiceConfigurationContext context)
            {
                Configure<AbpDistributedCacheOptions>(options =>
                {
                    options.KeyPrefix = "MyApp:"; // important in multi-app / shared-Redis setups
                });
            }
    • Consider lowering load from Application Configuration

      • ApplicationConfigurationDto is read on app start for every client. If you have many concurrent clients or large DTO customizations:
        • Enable sliding/absolute expiration on cache entries to reduce Redis pressure (note that GlobalCacheEntryOptions applies to all ABP distributed cache entries, not only this DTO):

              Configure<AbpDistributedCacheOptions>(options =>
              {
                  options.GlobalCacheEntryOptions.SlidingExpiration = TimeSpan.FromMinutes(10);
              });
        • Avoid heavy custom logic inside your overridden AbpApplicationConfigurationAppService.GetAsync that triggers multiple DB or cache calls per request; pre-aggregate, cache sub-parts, or compute once and cache with a reasonable TTL.
        • If you added custom data (like module permission maps), cache it separately with GetOrAddAsync and short TTLs, then compose the DTO from the cached pieces (see the GetOrAddAsync sketch after this list).
    • Prevent cache operations from being cancelled by request aborts

      • If frontends sometimes cancel requests (navigation, timeouts), that cancellation token may bubble into Redis calls. Consider isolating cancellation tokens for cache access:

            public class NoCancelTokenProvider : ICancellationTokenProvider, ISingletonDependency
            {
                public CancellationToken Token => CancellationToken.None;

                public IDisposable Use(CancellationToken cancellationToken) => NullDisposable.Instance;

                private sealed class NullDisposable : IDisposable
                {
                    public static readonly NullDisposable Instance = new();
                    public void Dispose() { }
                }
            }
      • Or wrap hot cache calls with CancellationToken.None explicitly in your own logic where safe.
    • Check for thread pool starvation and blocking code

      • Your log shows WORKER Busy=99 with Min=16 and queued work items; once busy threads exceed the configured minimum, the pool only adds threads gradually, which delays Redis response processing and can exacerbate timeouts under load.
      • Audit for synchronous blocking (.Result/.Wait()) on async methods, long GC pauses, or CPU saturation.
      • Ensure the ASP.NET Core thread pool minimum/maximum thread counts are not overly constrained (a minimal ThreadPool.SetMinThreads sketch follows this list).
    • Use connection multiplexing correctly across tiered services

      • Every tier (AuthServer, API Host, Blazor Server) should point to the same Redis with the same configuration and not create unnecessary separate multiplexer instances per scope. The default DI registrations already share a singleton; avoid manual new ConnectionMultiplexer per request.
    • As a last resort, raise the retry budget and make Redis itself more resilient

      • If the environment is noisy, consider a Redis Cluster or replicas, co-locate the app and Redis to reduce latency, or provision a higher (cloud) SKU to reduce server-side stalls.
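
    For the health checks above, a minimal diagnostics sketch using StackExchange.Redis directly. The endpoint is taken from the log and should be adjusted; allowAdmin=true is included only in case SLOWLOG is gated as an admin command by the client:

        using System;
        using System.Linq;
        using System.Threading.Tasks;
        using StackExchange.Redis;

        public static class RedisDiagnostics
        {
            public static async Task DumpAsync()
            {
                // Endpoint taken from the log above; adjust to your environment.
                using var muxer = await ConnectionMultiplexer.ConnectAsync(
                    "10.166.0.97:6379,abortConnect=false,allowAdmin=true");

                // Round-trip latency as seen by this client.
                var rtt = await muxer.GetDatabase().PingAsync();
                Console.WriteLine($"PING: {rtt.TotalMilliseconds:F1} ms");

                var server = muxer.GetServer("10.166.0.97", 6379);

                // SLOWLOG: commands that exceeded the server's slowlog-log-slower-than threshold.
                foreach (var entry in await server.SlowlogGetAsync(10))
                {
                    Console.WriteLine(
                        $"SLOWLOG {entry.Time:u}: {string.Join(" ", entry.Arguments)} took {entry.Duration.TotalMilliseconds:F1} ms");
                }

                // INFO: server-side stats (clients, memory, CPU, keyspace, ...).
                foreach (var section in await server.InfoAsync())
                {
                    foreach (var kv in section)
                    {
                        Console.WriteLine($"{section.Key}.{kv.Key} = {kv.Value}");
                    }
                }
            }
        }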
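
    For the "cache sub-parts with GetOrAddAsync" point, a minimal sketch. MyModulePermissionMap, MyModulePermissionMapProvider, the cache key and the 30-minute TTL are illustrative placeholders; only IDistributedCache<T>.GetOrAddAsync and DistributedCacheEntryOptions come from ABP / Microsoft.Extensions:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;
        using Microsoft.Extensions.Caching.Distributed;
        using Volo.Abp.Caching;
        using Volo.Abp.DependencyInjection;

        // Hypothetical piece of custom data that gets composed into the configuration response.
        public class MyModulePermissionMap
        {
            public Dictionary<string, string[]> ModulesToPermissions { get; set; } = new();
        }

        public class MyModulePermissionMapProvider : ITransientDependency
        {
            private readonly IDistributedCache<MyModulePermissionMap> _cache;

            public MyModulePermissionMapProvider(IDistributedCache<MyModulePermissionMap> cache)
            {
                _cache = cache;
            }

            public Task<MyModulePermissionMap> GetAsync()
            {
                // One Redis round trip on a hit; on a miss the factory runs once and the
                // result is stored with a short absolute TTL so it is refreshed periodically.
                return _cache.GetOrAddAsync(
                    "module-permission-map",            // key (AbpDistributedCacheOptions.KeyPrefix is prepended)
                    BuildMapAsync,                      // executed only on cache miss
                    () => new DistributedCacheEntryOptions
                    {
                        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30) // illustrative TTL
                    });
            }

            private Task<MyModulePermissionMap> BuildMapAsync()
            {
                // Hypothetical expensive aggregation (database queries, permission lookups, ...).
                return Task.FromResult(new MyModulePermissionMap());
            }
        }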
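
    For the thread pool point, a minimal ThreadPool.SetMinThreads sketch placed early in Program.cs; 200 is an illustrative value to load-test against, not a recommendation, and the same minimums can alternatively be set via the System.Threading.ThreadPool.MinThreads runtime config property:

        using System;
        using System.Threading;

        // Raise the pool minimums before the host starts, so bursts of work are not
        // throttled by the pool's gradual thread injection once Busy exceeds Min
        // (the log above shows WORKER Busy=99 with Min=16).
        ThreadPool.GetMinThreads(out var minWorker, out var minIocp);
        ThreadPool.SetMinThreads(
            workerThreads: Math.Max(minWorker, 200),          // illustrative; tune for your workload
            completionPortThreads: Math.Max(minIocp, 200));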

    Example minimal configuration in the WebModule:

        [DependsOn(typeof(AbpCachingStackExchangeRedisModule))]
        public class MyWebModule : AbpModule
        {
            public override void ConfigureServices(ServiceConfigurationContext context)
            {
                Configure<AbpDistributedCacheOptions>(options =>
                {
                    options.KeyPrefix = "MyApp:";
                    options.GlobalCacheEntryOptions.SlidingExpiration = TimeSpan.FromMinutes(10);
                });
            }
        }

    2. Sources
    • StackExchange.Redis client timeout causes and mitigations:
      • https://stackexchange.github.io/StackExchange.Redis/Timeouts
    • ABP Distributed Cache and Redis integration source:
      • https://github.com/abpframework/abp/blob/dev/framework/src/Volo.Abp.Caching/Volo/Abp/Caching/DistributedCache.cs

    This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.

  • heshengli created

    This application has many users, and its pages are kept open for long periods without being closed.

  • maliming created (Support Team Fullstack Developer)

    Hello,

    You can try increasing the timeout. Most likely the Redis server cannot keep up with the load, which is what causes these timeouts.
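
    A minimal sketch of what raising the timeouts can look like, assuming the connection string is supplied through the Redis:Configuration key in appsettings.json (the key read by the Volo.Abp.Caching.StackExchangeRedis integration); the 15000 ms values are illustrative and should be tuned rather than copied:

        {
          "Redis": {
            "Configuration": "10.166.0.97:6379,abortConnect=false,connectTimeout=10000,syncTimeout=15000,asyncTimeout=15000"
          }
        }

    Keep in mind that raising syncTimeout/asyncTimeout only tolerates more latency; the server-side cause (slow commands, CPU, large values) still needs to be investigated.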
