There is currently no ABP‑specific or Mapperly‑level “global switch” that makes Mapperly behave like AutoMapper regarding null source / navigation properties. Mapperly’s behavior is intentionally strict and driven by C# nullability.
Given that, for a large ABP application the realistic options are:
ABP v10 supports using AutoMapper and Mapperly side by side. For a big system that heavily relied on AutoMapper’s implicit null handling, the most practical migration path is to keep AutoMapper for the existing mappings and introduce Mapperly gradually. Keep the AutoMapper module registered as before:
[DependsOn(
typeof(AbpAutoMapperModule),
// other modules...
)]
public class MyProjectApplicationModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
context.Services.AddAutoMapperObjectMapper<MyProjectApplicationModule>();
Configure<AbpAutoMapperOptions>(options =>
{
options.AddMaps<MyProjectApplicationModule>(validate: true);
});
}
}
Keep your existing Profile classes and CreateMap<,>() configurations; AutoMapper will keep handling nulls the way it always has. For new or migrated modules, register Mapperly as well:
[DependsOn(
typeof(AbpMapperlyModule),
// ...
)]
public class MyNewModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
context.Services.AddMapperlyObjectMapper<MyNewModule>();
}
}
Then use ObjectMapperContext to select which mapper is used:
public class LegacyAppService : ApplicationService
{
public LegacyAppService()
{
ObjectMapperContext = typeof(MyProjectApplicationModule); // AutoMapper context
}
}
public class NewAppService : ApplicationService
{
public NewAppService()
{
ObjectMapperContext = typeof(MyNewModule); // Mapperly context
}
}
This allows you to migrate incrementally instead of forcing the whole codebase to Mapperly’s strict rules at once.
For the parts you do want to move fully to Mapperly, the strategy is:
Mapperly trusts your C# nullability annotations:
public class Order
{
public Customer? Customer { get; set; } // nullable if optional
}
public class OrderDto
{
public string? CustomerName { get; set; } // nullable if Customer may be null
}
Run with nullable reference types enabled and clean up warnings on your mapping types; this drastically reduces Mapperly‑induced NREs.
Where you previously relied on AutoMapper’s “it just works” behavior for nested paths, switch to null‑safe custom logic instead of direct property flattening.
Example:
[Mapper]
public partial class OrderMapper : MapperBase<Order, OrderDto>
{
public override partial OrderDto Map(Order source);
// Custom null-safe logic implemented manually instead of letting Mapperly generate it
public override void Map(Order source, OrderDto destination)
{
destination.CustomerName = source.Customer?.Name ?? string.Empty;
}
}
Or use an AfterMap-style pattern in a partial-method wrapper (see the sketch below). The key idea is to do the ?. and ?? yourself rather than relying on Mapperly to silently ignore nulls.
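A minimal sketch of that wrapper pattern, assuming ABP’s MapperBase<,> and Mapperly’s partial-method generation (this is a variation of the OrderMapper above; MapToDto, MapToExisting and AfterMap are just illustrative names):

using Riok.Mapperly.Abstractions;
using Volo.Abp.Mapperly;

[Mapper]
public partial class OrderMapper : MapperBase<Order, OrderDto>
{
    public override OrderDto Map(Order source)
    {
        // Generated mapping first, then manual null-safe post-processing.
        var destination = MapToDto(source);
        AfterMap(source, destination);
        return destination;
    }

    public override void Map(Order source, OrderDto destination)
    {
        MapToExisting(source, destination);
        AfterMap(source, destination);
    }

    // Mapperly generates these; the nested Customer path is excluded so the
    // generated code never dereferences a possibly-null Customer.
    [MapperIgnoreSource(nameof(Order.Customer))]
    [MapperIgnoreTarget(nameof(OrderDto.CustomerName))]
    private partial OrderDto MapToDto(Order source);

    [MapperIgnoreSource(nameof(Order.Customer))]
    [MapperIgnoreTarget(nameof(OrderDto.CustomerName))]
    private partial void MapToExisting(Order source, OrderDto destination);

    private static void AfterMap(Order source, OrderDto destination)
    {
        destination.CustomerName = source.Customer?.Name ?? string.Empty;
    }
}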
Ensure your module is correctly configured for Mapperly so that ObjectMapper.Map<,>() goes through Mapperly for that context:
using Volo.Abp.Mapperly;
[DependsOn(typeof(AbpMapperlyModule))]
public class MyMapperlyApplicationModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
// default Mapperly object mapper
context.Services.AddMapperlyObjectMapper();
// optional: contextual Mapperly mapper for this module
context.Services.AddMapperlyObjectMapper<MyMapperlyApplicationModule>();
}
}
Then, in your service:
public class MyAppService : ApplicationService
{
public MyAppService()
{
ObjectMapperContext = typeof(MyMapperlyApplicationModule);
}
public Task<OrderDto> GetAsync(Order order)
{
var dto = ObjectMapper.Map<Order, OrderDto>(order); // Mapperly-based
return Task.FromResult(dto);
}
}
For the crashes you’re seeing now:
Identify the mappings that flatten nested paths (Customer.Name -> CustomerName, Parent.Child.GrandChild.Prop, etc.), then either make the affected destination properties nullable or supply defaults (string.Empty) through custom code. This is mechanical and can be assisted by IDE analyzers and the nullable-reference-type warnings mentioned above.
Does ABP/Mapperly provide a global option to mimic AutoMapper’s null behavior?
→ No. There is no framework‑level or Mapperly‑level setting that silently ignores null sources or null navigation properties the way AutoMapper did.
Recommended migration strategy for large apps: run AutoMapper and Mapperly side by side, make nullability explicit on your mapping types, move null handling into custom mapping code, and configure ObjectMapperContext and AbpMapperlyModule properly per module.
ABP migration guide – AutoMapper → Mapperly (concepts, coexistence, registration):
https://abp.io/docs/10.0/release-info/migration-guides/AutoMapper-To-Mapperly
ABP object‑to‑object mapping & Mapperly integration (MapperBase, configuration, ObjectMapper):
https://abp.io/docs/latest/framework/infrastructure/object-to-object-mapping
Mapperly mapper configuration & nullability (general behavior; no global ignore‑null flag):
https://mapperly.riok.app/docs/configuration/mapper/
https://mapperly.riok.app/docs/configuration/flattening/
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
ABP does not provide a built‑in “synchronize / upsert and delete obsolete” helper for permission grants, but it is straightforward to implement on top of the existing permission management APIs.
The recommended approach for seeding permissions is:
Write an IDataSeedContributor.
Use IPermissionDefinitionManager (and/or your own configuration) to know which permissions should be granted.
Use IPermissionGrantRepository (or IPermissionGrantStore / IPermissionManager) to insert the missing grants and delete the obsolete ones.
Below is an example that performs full synchronization (“database must equal seed definition”) for a given role.
Typically, you maintain the desired permissions in code (constants, config, etc.):
public static class MyPermissionSeed
{
// Permission names; these should exist as defined permissions
public static readonly string[] AdminRolePermissions =
{
"MyApp.Permissions.UserManagement",
"MyApp.Permissions.Reports.View",
"MyApp.Permissions.Reports.Edit",
// ...
};
public const string ProviderName = "R"; // "R" = Role (same as RolePermissionValueProvider.ProviderName)
public const string AdminRoleName = "admin"; // or your own role name
}
Note: For the role permission provider name, ABP uses RolePermissionValueProvider.ProviderName (usually "R").
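If you prefer not to hard-code "R", you can reference the framework constant instead; a one-line sketch, assuming RolePermissionValueProvider is in the Volo.Abp.Authorization.Permissions namespace:

using Volo.Abp.Authorization.Permissions;

public static class MyPermissionSeedConstants
{
    // Assumption: same "R" value as the hard-coded constant above
    public const string RoleProviderName = RolePermissionValueProvider.ProviderName;
}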
Implement the IDataSeedContributor. This contributor will be executed by your DbMigrator and will compare the current grants with the desired set, insert the missing ones and delete the obsolete ones:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Volo.Abp.Data;
using Volo.Abp.DependencyInjection;
using Volo.Abp.PermissionManagement;
using Volo.Abp.Uow;
using Volo.Abp.Users;
using Volo.Abp.MultiTenancy;
using Volo.Abp.Guids;
public class PermissionSyncDataSeedContributor :
IDataSeedContributor,
ITransientDependency
{
private readonly IPermissionGrantRepository _permissionGrantRepository;
private readonly IUnitOfWorkManager _unitOfWorkManager;
private readonly IPermissionDefinitionManager _permissionDefinitionManager;
private readonly ICurrentTenant _currentTenant;
private readonly IGuidGenerator _guidGenerator;
private readonly IRoleIdLookupService _roleIdLookupService;
// You can implement this or just use IdentityRoleManager directly.
public PermissionSyncDataSeedContributor(
IPermissionGrantRepository permissionGrantRepository,
IUnitOfWorkManager unitOfWorkManager,
IPermissionDefinitionManager permissionDefinitionManager,
ICurrentTenant currentTenant,
IGuidGenerator guidGenerator,
IRoleIdLookupService roleIdLookupService)
{
_permissionGrantRepository = permissionGrantRepository;
_unitOfWorkManager = unitOfWorkManager;
_permissionDefinitionManager = permissionDefinitionManager;
_currentTenant = currentTenant;
_guidGenerator = guidGenerator;
_roleIdLookupService = roleIdLookupService;
}
public async Task SeedAsync(DataSeedContext context)
{
// If you have multi-tenancy, you may want to loop tenants.
using var uow = _unitOfWorkManager.Begin(requiresNew: true, isTransactional: true);
// Resolve the role id (provider key) for the admin role
var adminRoleId = await _roleIdLookupService.GetRoleIdByNameAsync(
MyPermissionSeed.AdminRoleName,
context.TenantId);
if (adminRoleId == null)
{
await uow.CompleteAsync();
return; // role not found; nothing to do
}
var providerName = MyPermissionSeed.ProviderName;
var providerKey = adminRoleId.ToString();
// 1) Desired set from seed (filter out undefined permissions just in case)
var allPermissionDefinitions = await _permissionDefinitionManager.GetPermissionsAsync();
var validPermissionNames = new HashSet<string>(
allPermissionDefinitions.Select(p => p.Name)
);
var desiredPermissionNames = MyPermissionSeed.AdminRolePermissions
.Where(validPermissionNames.Contains)
.ToHashSet();
// 2) Load current grants for this provider (role); the tenant comes from ICurrentTenant
// (use _currentTenant.Change(context.TenantId) if you loop over tenants yourself)
var currentGrants = await _permissionGrantRepository.GetListAsync(
providerName,
providerKey
);
var currentPermissionNames = currentGrants
.Select(g => g.Name)
.ToHashSet();
// 3) Determine which to add and which to delete
var toAdd = desiredPermissionNames.Except(currentPermissionNames).ToList();
var toDelete = currentPermissionNames.Except(desiredPermissionNames).ToList();
// 4) Insert missing
foreach (var permissionName in toAdd)
{
await _permissionGrantRepository.InsertAsync(
new PermissionGrant(
_guidGenerator.Create(),
permissionName,
providerName,
providerKey,
context.TenantId
),
autoSave: false
);
}
// 5) Delete obsolete
if (toDelete.Any())
{
foreach (var permissionName in toDelete)
{
var grant = currentGrants.FirstOrDefault(g => g.Name == permissionName);
if (grant != null)
{
await _permissionGrantRepository.DeleteAsync(grant, autoSave: false);
}
}
}
await uow.CompleteAsync();
}
}
Important points:
Insert new PermissionGrant entities for the toAdd list.
Compute toDelete and remove those grants.
Use DataSeedContext.TenantId to seed per tenant; if you loop over tenants yourself, run SeedAsync (or the inner logic) for each.
You can use IdentityRoleManager (from Volo.Abp.Identity) to find the role id, wrapped in a small service:
using System;
using System.Threading.Tasks;
using Volo.Abp.Identity;
public interface IRoleIdLookupService
{
Task<Guid?> GetRoleIdByNameAsync(string roleName, Guid? tenantId);
}
public class RoleIdLookupService : IRoleIdLookupService
{
private readonly IdentityRoleManager _roleManager;
public RoleIdLookupService(IdentityRoleManager roleManager)
{
_roleManager = roleManager;
}
public async Task<Guid?> GetRoleIdByNameAsync(string roleName, Guid? tenantId)
{
// tenantId is resolved via the ambient ICurrentTenant; switch tenants before calling if needed
var role = await _roleManager.FindByNameAsync(roleName);
return role?.Id;
}
}
You can simplify this and inject IdentityRoleManager directly into the seed contributor if you prefer.
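If you trigger seeding yourself rather than through the DbMigrator (which in the ABP templates already seeds the host and each tenant), a rough sketch of a per-tenant loop could look like this; MultiTenantPermissionSyncService is a hypothetical helper and ITenantRepository comes from the Volo.Abp.TenantManagement module:

using System.Threading.Tasks;
using Volo.Abp.Data;
using Volo.Abp.DependencyInjection;
using Volo.Abp.MultiTenancy;
using Volo.Abp.TenantManagement;

public class MultiTenantPermissionSyncService : ITransientDependency
{
    private readonly IDataSeeder _dataSeeder;
    private readonly ITenantRepository _tenantRepository;
    private readonly ICurrentTenant _currentTenant;

    public MultiTenantPermissionSyncService(
        IDataSeeder dataSeeder,
        ITenantRepository tenantRepository,
        ICurrentTenant currentTenant)
    {
        _dataSeeder = dataSeeder;
        _tenantRepository = tenantRepository;
        _currentTenant = currentTenant;
    }

    public async Task SyncAllAsync()
    {
        // Host side (TenantId = null)
        await _dataSeeder.SeedAsync();

        foreach (var tenant in await _tenantRepository.GetListAsync())
        {
            using (_currentTenant.Change(tenant.Id))
            {
                // The contributor above then receives DataSeedContext.TenantId = tenant.Id
                await _dataSeeder.SeedAsync(tenant.Id);
            }
        }
    }
}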
Place PermissionSyncDataSeedContributor in a Domain project (e.g., *.Domain) and keep the permission name constants close to the permission definitions (*.Application.Contracts or *.Domain.Shared).
Data seeding infrastructure and IDataSeedContributor in ABP:
https://abp.io/docs/latest/framework/infrastructure/data-seeding
ABP’s own permission seeding pattern (PermissionDataSeedContributor):
https://github.com/abpframework/abp/blob/e3e1779de6df5d26f01cdc8e99ac9cbcb3d24d3c/modules/permission-management/src/Volo.Abp.PermissionManagement.Domain/Volo/Abp/PermissionManagement/PermissionDataSeedContributor.cs#L26
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
In your current state, the tokens issued at https://auth.aaa.com/ are rejected by the services (invalid_token, issuer is invalid). You must make issuer and authority 100% consistent everywhere, and in Kubernetes you typically also need to explicitly set ValidIssuers (and sometimes a SignatureValidator) in each service.
Do the following steps exactly.
In your AuthServer project module:
public override void PreConfigureServices(ServiceConfigurationContext context)
{
var configuration = context.Services.GetConfiguration();
PreConfigure<OpenIddictServerBuilder>(builder =>
{
// Must be the public URL of your AuthServer (the same URL you use from browser)
builder.SetIssuer(new Uri(configuration["AuthServer:Authority"]!));
});
}
And in AuthServer appsettings.Production.json or environment variables (in k8s):
"AuthServer": {
"Authority": "https://auth.aaa.com", // no trailing slash; code adds it
"RequireHttpsMetadata": "true"
}
Important:
https://auth.aaa.com/.well-known/openid-configuration must be reachable from inside the cluster and from browsers.
Then set ValidIssuers in every project that validates tokens (Angular backend, web app, web gateway, all microservices). Configure authentication like this in the module:
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using Volo.Abp;
using Volo.Abp.Modularity;
public class MyServiceHttpApiHostModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
var configuration = context.Services.GetConfiguration();
context.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddAbpJwtBearer(options =>
{
// Must be the same public URL as in AuthServer
options.Authority = configuration["AuthServer:Authority"]; // "https://auth.aaa.com"
options.RequireHttpsMetadata = true;
// Audience must match 'aud' claim in the token for this service
options.Audience = "MyServiceName";
// FIX: IDX10204 / invalid issuer in k8s
options.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuers = new[]
{
configuration["AuthServer:Authority"]!.EnsureEndsWith('/') // "https://auth.aaa.com/"
}
};
// If you still get signature key resolution errors in k8s (IDX10500),
// you can temporarily add this workaround:
/*
options.TokenValidationParameters.SignatureValidator = (token, parameters) =>
{
var jwt = new Microsoft.IdentityModel.JsonWebTokens.JsonWebToken(token);
return jwt;
};
*/
});
}
}
And in each service’s config (values via env in k8s):
"AuthServer": {
"Authority": "https://auth.aaa.com",
"RequireHttpsMetadata": "true"
}
Every service must use this same AuthServer:Authority.
Do not mix in http:// or internal hostnames here.
If services call other services using client credentials (IdentityClients), also align their Authority:
"IdentityClients": {
"Default": {
"Authority": "https://auth.aaa.com",
"ClientId": "MyService_Internal",
"ClientSecret": "…",
"GrantType": "client_credentials",
"Scope": "MyScope"
}
}
Again, same https://auth.aaa.com everywhere.
From a pod (e.g., admin microservice), run:
curl -k https://auth.aaa.com/.well-known/openid-configuration
You should see JSON whose issuer is https://auth.aaa.com/ and whose endpoint URLs all start with https://auth.aaa.com/... If that’s not true, fix DNS / ingress until it is.
On one failing service (e.g., Administration microservice):
Enable detailed IdentityModel debug logs (a minimal snippet is shown after these steps).
Confirm that at startup or at first authenticated request it tries to fetch:
https://auth.aaa.com/.well-known/openid-configuration
If it instead goes to http://... or localhost, your Authority is still wrong in that service.
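To turn on the detailed IdentityModel logs mentioned above, a minimal sketch (only for temporary diagnostics, since it writes token details / PII to the logs) is:

// e.g. at the top of Program.cs of the failing service; remove it again afterwards
Microsoft.IdentityModel.Logging.IdentityModelEventSource.ShowPII = true;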
Once:
the OpenIddict issuer is https://auth.aaa.com/,
every service uses AuthServer:Authority = https://auth.aaa.com, and
TokenValidationParameters.ValidIssuers include https://auth.aaa.com/,
your invalid_token / issuer errors will disappear and the services will start authenticating requests.
The same ValidIssuers and SetIssuer pattern is covered in related support threads, where ValidIssuers plus the Kubernetes workaround solved “no service can authenticate” in ABP microservices.
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
The new error:
WWW-Authenticate: Bearer error="invalid_token",
error_description="The issuer 'https://auth.routilia.com/' is invalid"
means: the token’s iss = https://auth.routilia.com/ but the API that validates the token does not accept this issuer as valid.
Configure both AuthServer and all APIs / gateways so that:
the issuer / authority is consistently https://auth.routilia.com/.
In your AuthServer module:
public override void PreConfigureServices(ServiceConfigurationContext context)
{
var configuration = context.Services.GetConfiguration();
PreConfigure<OpenIddictServerBuilder>(builder =>
{
// Must be the public URL of your AuthServer
builder.SetIssuer(new Uri(configuration["AuthServer:Authority"]!));
});
}
And in appsettings.json (or k8s env vars) of AuthServer:
"AuthServer": {
"Authority": "https://auth.routilia.com",
"RequireHttpsMetadata": "true"
}
Make sure the AuthServer is actually reachable at https://auth.routilia.com/.well-known/openid-configuration from inside the cluster.
In every service that receives bearer tokens (web app, public web, gateways, microservices), configure JWT like this:
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using Volo.Abp;
using Volo.Abp.Modularity;
public class YourServiceModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
var configuration = context.Services.GetConfiguration();
context.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddAbpJwtBearer(options =>
{
options.Authority = configuration["AuthServer:Authority"]; // "https://auth.routilia.com"
options.Audience = "YourServiceName"; // must match token 'aud'
options.TokenValidationParameters = new TokenValidationParameters
{
// Accept the issuer seen in the token:
ValidIssuers = new[]
{
configuration["AuthServer:Authority"].EnsureEndsWith('/')
// -> "https://auth.routilia.com/"
}
};
});
}
}
And in appsettings.json or k8s env of each API:
"AuthServer": {
"Authority": "https://auth.routilia.com",
"RequireHttpsMetadata": "true"
}
If you previously used some internal HTTP URL (like http://auth-svc or http://localhost) as Authority or issuer, remove it and consistently use the same HTTPS public URL everywhere.
IdentityClients (for internal HTTP calls): for services that call others via IdentityModel (client credentials), ensure their IdentityClients:Default:Authority also points to the same HTTPS URL:
"IdentityClients": {
"Default": {
"Authority": "https://auth.routilia.com",
"ClientId": "YourClient",
"ClientSecret": "YourSecret",
"GrantType": "client_credentials",
"Scope": "YourScope"
}
}
Once:
the AuthServer issuer (SetIssuer) = https://auth.routilia.com/, and
every API’s Authority and ValidIssuers include https://auth.routilia.com/,
the invalid_token / “issuer is invalid” error will disappear.
The same ValidIssuers and SetIssuer patterns used above also apply to IDX10204 / issuer problems in Kubernetes deployments.
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
That log means the API that validates the token has no issuer configured in its TokenValidationParameters, so it can’t match the iss in the token.
This is common when running behind Kubernetes / reverse proxy.
Configure JWT bearer explicitly in the service where the error happens (web app, gateway, microservice, etc.), and set ValidIssuers to your AuthServer authority (the public URL that issues the token).
Example (in the API / gateway project):
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using Volo.Abp;
using Volo.Abp.Modularity;
public class YourApiModule : AbpModule
{
public override void ConfigureServices(ServiceConfigurationContext context)
{
var configuration = context.Services.GetConfiguration();
context.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddAbpJwtBearer(options =>
{
options.Authority = configuration["AuthServer:Authority"]; // e.g. "https://your-authserver.yourdomain.com"
options.Audience = "YourServiceName"; // must match 'aud' in token
options.TokenValidationParameters = new TokenValidationParameters
{
// Fixes: IDX10204: Unable to validate issuer...
ValidIssuers = new[]
{
configuration["AuthServer:Authority"].EnsureEndsWith('/')
}
};
});
}
}
Key points:
AuthServer:Authority must be the public URL of the AuthServer that issues the token (for example: https://auth.yourdomain.com/), and you must use the same value in:
the AuthServer itself (the OpenIddict issuer) and every API / gateway that validates the tokens. The issuer in tokens generated by OpenIddict normally ends with /, so the ValidIssuers entry should too:
context.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddAbpJwtBearer(options =>
{
options.Authority = configuration["AuthServer:Authority"];
options.Audience = "YourServiceName";
options.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuers = new[]
{
configuration["AuthServer:Authority"].EnsureEndsWith('/')
},
// Optional workaround when keys cannot be resolved in K8s:
SignatureValidator = (token, parameters) =>
{
var jwt = new Microsoft.IdentityModel.JsonWebTokens.JsonWebToken(token);
return jwt;
}
};
});
Also verify on the AuthServer side (OpenIddict):
PreConfigure<OpenIddictServerBuilder>(builder =>
{
var configuration = context.Services.GetConfiguration();
builder.SetIssuer(new Uri(configuration["AuthServer:Authority"]));
});
So the issuer in the token and the authority / valid issuer in your APIs are identical.
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
Inside Docker, NuGet SSL problems usually come from the container not trusting the same certificates as your host (corporate proxy, dev HTTPS certs, man‑in‑the‑middle scanners, etc.). Below are concrete ways to fix this when building/running ABP apps in Docker.
A) dotnet restore (NuGet feed SSL)
Check the exact error message.
Most common messages are like:
The SSL connection could not be established
Unable to load the service index for source 'https://....'
Authentication failed because the remote party has closed the transport stream
If you are behind a corporate proxy with its own CA:
You must import the proxy’s root certificate into the Docker image so dotnet/NuGet trusts it.
Typical Dockerfile pattern (Linux Debian/Ubuntu base images):
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
# Copy your corporate/proxy root certificate to the image
COPY mycorp-root-ca.crt /usr/local/share/ca-certificates/mycorp-root-ca.crt
# Register certificate in container trust store
RUN update-ca-certificates
# Now restore & build
COPY *.sln ./
COPY src/ ./src/
RUN dotnet restore
RUN dotnet build -c Release -o /app/build
Notes:
The certificate must be a Base-64 (PEM) .crt file, e.g. mycorp-root-ca.crt.
After update-ca-certificates, the dotnet CLI and NuGet will use the updated trust store.
If you use a private NuGet feed with a self-signed certificate, import that certificate into the container trust store the same way.
Temporarily test by disabling HTTPS verification (not recommended for production)
Only to confirm it’s an SSL trust issue, you can temporarily point to an HTTP feed (if available) or use another public feed (like nuget.org) in NuGet.config and see if restore succeeds. If yes, then the problem is definitely certificate/trust.
When developing ABP apps with Docker (especially microservice template), you often use local HTTPS endpoints. For development, ABP docs recommend using mkcert to create locally trusted certificates:
Install mkcert on your host
Follow mkcert’s guide:
mkcert -install
mkcert localhost
This produces localhost.pem and localhost-key.pem (or .crt/.key).
Use the mkcert-generated certificates for your development endpoints in Docker.
Typical pattern:
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
WORKDIR /app
# Copy mkcert-generated certificate into container
COPY certs/localhost.pem /https/aspnetapp.crt
COPY certs/localhost-key.pem /https/aspnetapp.key
# Configure Kestrel via environment in docker-compose.yml:
# ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.crt
# ASPNETCORE_Kestrel__Certificates__Default__KeyPath=/https/aspnetapp.key
This way both your browser and the container trust the same dev certificate.
For nuget.abp.io (the ABP Commercial feeds) over HTTPS, log in with the ABP CLI and check your sources:
abp login <your-email>
dotnet nuget list source
Ensure the ABP feed is HTTPS and works locally.
Propagate any custom CA/proxy trust to Docker
If your environment injects certificates (proxy, SSL inspection), you must import that CA as shown in section A.
Avoid turning off SSL validation
Don’t use DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER=false or similar workarounds in production – fix trust instead.
If restore still fails, check whether curl https://api.nuget.org/v3/index.json succeeds inside the container, and whether you are using mkcert certificates for your dev ABP microservices.
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
The runtime error:
MissingMethodException: Method not found:
'Void Volo.Abp.Localization.LanguageInfo..ctor(System.String, System.String, System.String, System.String)'
is caused by a version mismatch between your ABP Framework packages and the LeptonX theme package.
In ABP 8.3 the LanguageInfo type (from Volo.Abp.Localization) exposes only the 3‑parameter constructor.
LeptonX 3.1 was compiled against an earlier ABP version where the 4‑parameter constructor still existed. At runtime, the CLR tries to call that old 4‑parameter constructor and fails with MissingMethodException.
Fix
Open your .csproj and update the LeptonX package:
<PackageReference Include="Volo.Abp.AspNetCore.Mvc.UI.Theme.LeptonX" Version="3.3.*" />
(or explicitly, e.g. 3.3.0/3.3.1, matching your ABP 8.3 minor version).
Check all package versions in the solution:
dotnet list package
If you see mixed versions, update them so every Volo.Abp.* package is 8.3.x and LeptonX is 3.3.x.
abp clean
dotnet restore
dotnet build
dotnet run
After upgrading LeptonX to 3.3.x, it will be compiled against the correct LanguageInfo constructor (3 parameters), and the MissingMethodException will be gone.
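For reference, a minimal sketch of what code compiled against ABP 8.x calls (the three-parameter LanguageInfo constructor, configured via AbpLocalizationOptions in a module’s ConfigureServices):

using Volo.Abp.Localization;
using Volo.Abp.Modularity;

public class MyWebModule : AbpModule
{
    public override void ConfigureServices(ServiceConfigurationContext context)
    {
        Configure<AbpLocalizationOptions>(options =>
        {
            // ABP 8.x LanguageInfo: (cultureName, uiCultureName, displayName);
            // the old fourth flag-icon argument no longer exists.
            options.Languages.Add(new LanguageInfo("en", "en", "English"));
        });
    }
}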
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
In the EasyCRM Angular app the text shown in the browser tab is the HTML <title> tag of the Angular app, not an ABP setting.
Change it in angular/src/index.html:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<!-- CHANGE THIS LINE -->
<title>My New CRM Title</title>
<base href="/" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="icon" type="image/x-icon" href="favicon.ico" />
</head>
<body>
<app-root></app-root>
</body>
</html>
Then restart the Angular app (yarn start or npm start). If you later want localized, per-page titles, you can additionally use a custom Angular TitleStrategy, but for simply replacing “MyProjectName” the index.html <title> tag is the place to change.
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
That almost always means the access tokens API-1 sends are no longer accepted by APIs 2 and 3 (wrong client, scopes, or audiences).
v10 moved everything fully to OpenIddict and tightened validation. You must realign the client definition, the scopes it is allowed to request, and the audiences APIs 2 & 3 accept.
Below is a concrete, minimal set of checks/fixes that resolves this in practice.
In API‑1:
public class LoggingHttpClientAuthenticator : IRemoteServiceHttpClientAuthenticator, ISingletonDependency
{
private readonly IRemotingTokenStore _tokenStore;
private readonly ILogger<LoggingHttpClientAuthenticator> _logger;
public LoggingHttpClientAuthenticator(
IRemotingTokenStore tokenStore,
ILogger<LoggingHttpClientAuthenticator> logger)
{
_tokenStore = tokenStore;
_logger = logger;
}
public async Task Authenticate(RemoteServiceHttpClientAuthenticateContext context)
{
context.Request.Headers.Authorization = null;
var token = await _tokenStore.GetTokenAsync(CancellationToken.None);
_logger.LogInformation("Outgoing inter-service access token: {Token}", token);
if (!string.IsNullOrEmpty(token))
{
context.Request.Headers.Add("Authorization", $"Bearer {token}");
}
}
}
Copy the logged token and call API-2 / API-3 directly (curl, Postman, etc.) with Authorization: Bearer <copied-token>. If you still get 403, then the problem is not ABP remote services, but access token configuration / scopes / audiences between AuthServer, API-1 client, and APIs 2 & 3.
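As a quick sanity check, you can decode the copied token and inspect its iss, aud, and scope claims; this is just a sketch assuming the System.IdentityModel.Tokens.Jwt package in a scratch console app:

using System;
using System.Linq;
using System.IdentityModel.Tokens.Jwt;

// Paste the token API-1 logged and dump the claims that matter for authorization.
var jwt = new JwtSecurityTokenHandler().ReadJwtToken("<copied-token>");

Console.WriteLine($"iss:   {jwt.Issuer}");
Console.WriteLine($"aud:   {string.Join(", ", jwt.Audiences)}");
Console.WriteLine($"scope: {string.Join(" ", jwt.Claims.Where(c => c.Type == "scope").Select(c => c.Value))}");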
In your AuthServer (Identity service), open your OpenIddictDataSeedContributor (or equivalent) and check the application (client) that represents API‑1.
It must have:
client_type = confidential, the client_credentials grant type, and the scopes for APIs 2 & 3. Roughly like the official template:
// scopes
await CreateScopesAsync(context, new[]
{
"Api2",
"Api3",
OpenIddictConstants.Scopes.Email,
OpenIddictConstants.Scopes.Profile,
OpenIddictConstants.Scopes.Roles
});
// client for API-1
await CreateApplicationAsync(
name: "Api1_Internal_Client",
type: OpenIddictConstants.ClientTypes.Confidential,
consentType: OpenIddictConstants.ConsentTypes.Systematic,
displayName: "Api1 internal client",
secret: "VERY-SECRET",
grantTypes: new[]
{
OpenIddictConstants.GrantTypes.ClientCredentials
},
scopes: new[]
{
"Api2",
"Api3"
});
Make sure the scope names you put here match what APIs 2 & 3 are configured to accept (next section).
If the client was migrated from IdentityServer4 config, the allowed scopes/resources often need to be re‑created in this OpenIddict seeding.
In each resource API (2 and 3) you normally have something like this in the module’s ConfigureServices:
public override void ConfigureServices(ServiceConfigurationContext context)
{
var configuration = context.Services.GetConfiguration();
context.Services.AddAuthentication("Bearer")
.AddJwtBearer("Bearer", options =>
{
options.Authority = configuration["AuthServer:Authority"]; // e.g. https://auth.mycompany.com
options.RequireHttpsMetadata = true;
options.Audience = "Api2"; // or "Api3" for the 3rd API
});
// ...
}
Or, if you use OpenIddict validation:
PreConfigure<OpenIddictValidationBuilder>(builder =>
{
builder.AddAudiences("Api2"); // or "Api3"
builder.UseLocalServer();
builder.UseAspNetCore();
});
Now match that:
Audience / AddAudiences("Api2") must equal the scope name you granted to API-1’s client (previous step). If the token carries scopes Api2 Api3 but API-2 is configured with Audience = "MyApi2" or something else, the token will be considered not for this API, and authorization will fail.
So:
Pick one canonical pair of names, e.g. "Api2" and "Api3".
Use them in the OpenIddictDataSeedContributor (scope + client’s allowed scopes).
Use them in APIs 2 & 3 (Audience or AddAudiences).
Your IRemotingTokenStore (or equivalent) in API-1 must request the same scopes:
public class RemotingClientOptions
{
public string ClientName { get; set; }
public string ClientSecret { get; set; }
public string Scope { get; set; } // e.g. "Api2 Api3"
}
appsettings.json in API‑1:
"AuthServer": {
"Authority": "https://<your-auth-server>"
},
"Remoting": {
"Client": {
"ClientName": "Api1_Internal_Client",
"ClientSecret": "VERY-SECRET",
"Scope": "Api2 Api3"
}
}
Token store (simplified):
var tokenResponse = await client.RequestClientCredentialsTokenAsync(
new ClientCredentialsTokenRequest
{
Address = discovery.TokenEndpoint,
ClientId = clientId,
ClientSecret = clientSecret,
Scope = scope // "Api2 Api3"
},
cancellationToken);
If in v9 you used older IdentityServer4 client‑credentials with different scope names, after migrating to OpenIddict those old names will not work. You must request scopes that actually exist in OpenIddict.
Once the token and scopes are fixed, your existing ABP remote service setup is usually fine:
// appsettings.json of API-1
"RemoteServices": {
"Api2": {
"BaseUrl": "https://api2.yourdomain"
},
"Api3": {
"BaseUrl": "https://api3.yourdomain"
}
}
// in API-1 module
context.Services.AddHttpClientProxies(
typeof(Api2ApplicationContractsModule).Assembly,
remoteServiceConfigurationName: "Api2"
);
context.Services.AddHttpClientProxies(
typeof(Api3ApplicationContractsModule).Assembly,
remoteServiceConfigurationName: "Api3"
);
Together with an IRemoteServiceHttpClientAuthenticator that attaches the bearer token, those parts typically don’t change between v9 and v10.
Summary of what to actually change
In AuthServer
Define the scopes (Api2, Api3) and give the API-1 client the client_credentials grant type and the allowed scopes Api2, Api3.
In APIs 2 & 3
Set the audience to Api2 / Api3 (the same names as the scopes above).
In API-1
Request tokens with Scope = "Api2 Api3" and use an IRemoteServiceHttpClientAuthenticator to attach the token to outgoing calls.
Debug once
Log the outgoing token and call APIs 2 & 3 directly with it to verify the whole chain (IRemotingTokenStore, IRemoteServiceHttpClientAuthenticator, scopes, and audiences).
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.
A 403 from API-2 / API-3 means the request was authenticated but the caller is not authorized for the endpoint (a policy / permission check failed).
So the problem is not “remote service call configuration”, but application‑to‑application authorization after the v9→v10 upgrade (where identity/auth stack has changed).
On each modular layered API (2 & 3):
Check the authorize attributes
[Authorize("SomePolicy")]
// or
[Authorize(MyPermissions.Foo.Bar)]
ensure that the principal represented by the token coming from API-1 is actually allowed for that policy / permission.
If you use permissions, ensure the calling client has them
When you use ABP’s permission system on APIs:
the calling client itself must be granted those permissions (via the client permission provider, ClientPermissionValueProvider). In a microservice/zero-trust setup this is typically done via a central identity/permission service; see the “Zero Trust Microservice Architecture with ABP Framework” article for a complete pattern:
// Example central permission check service
public interface IPermissionCheckerAppService : IApplicationService
{
Task<bool> CheckPermissionAsync(CheckPermissionInput input);
Task<MultiplePermissionGrantResultDto> CheckPermissionsAsync(CheckPermissionsInput input);
}
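To see at a glance which principal APIs 2 & 3 actually receive from API-1 (a real user or a pure client), you could temporarily drop a small diagnostic service into one of them. This is only a sketch: WhoAmIAppService is a hypothetical name and ICurrentClient is assumed to come from Volo.Abp.Clients.

using System.Threading.Tasks;
using Volo.Abp.Application.Services;
using Volo.Abp.Clients;

public class WhoAmIAppService : ApplicationService
{
    private readonly ICurrentClient _currentClient;

    public WhoAmIAppService(ICurrentClient currentClient)
    {
        _currentClient = currentClient;
    }

    public Task<object> GetAsync()
    {
        // Shows whether the incoming token represents a user or only a client.
        return Task.FromResult<object>(new
        {
            UserIsAuthenticated = CurrentUser.IsAuthenticated,
            UserId = CurrentUser.Id,
            ClientId = _currentClient.Id,
            ClientIsAuthenticated = _currentClient.IsAuthenticated
        });
    }
}

Calling it with the same token API-1 sends should show ClientIsAuthenticated = true and no user.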
Check that the token subject is what you expect
A client-credentials token has no user; if your endpoints assume ICurrentUser.IsAuthenticated or user-based permissions, they will deny requests coming from a pure client token.
On the single-layer API-1, which calls 2 & 3 via remote service / HTTP client:
Confirm how the token is added to outgoing calls
You should have something equivalent to an IRemoteServiceHttpClientAuthenticator that sets the Authorization: Bearer <token> header for all dynamic client proxy calls, e.g.:
[Dependency(ReplaceServices = true)]
[ExposeServices(typeof(IRemoteServiceHttpClientAuthenticator))]
public class SampleServiceHttpClientAuthenticator :
IRemoteServiceHttpClientAuthenticator, ISingletonDependency
{
private readonly IRemotingTokenStore _tokenStore;
public SampleServiceHttpClientAuthenticator(IRemotingTokenStore tokenStore)
{
_tokenStore = tokenStore;
}
public async Task Authenticate(RemoteServiceHttpClientAuthenticateContext context)
{
context.Request.Headers.Authorization = null;
var token = await _tokenStore.GetTokenAsync(CancellationToken.None);
if (!string.IsNullOrEmpty(token))
{
context.Request.Headers.Add("Authorization", $"Bearer {token}");
}
}
}
If after the upgrade this authenticator is not registered or not executed, API‑2/3 will receive a request without a valid token and return 401/403.
Check client credentials used by API‑1
If API‑1 uses client credentials to obtain a token (as in the “Zero Trust Microservice Architecture” sample):
Verify (typically in appsettings.json) that:
AuthServer:Authority points to the correct identity server.
Remoting:Client:ClientName, ClientSecret, and Scope (or equivalent) match a client that exists and is allowed to call API-2/3.
Ensure RemoteServices base URLs are correct
In API‑1, for each remote service:
"RemoteServices": {
"Service2": {
"BaseUrl": "https://<api2-host>"
},
"Service3": {
"BaseUrl": "https://<api3-host>"
}
}
and in your module:
context.Services.AddHttpClientProxies(
typeof(Service2ApplicationContractsModule).Assembly,
remoteServiceConfigurationName: "Service2"
);
context.Services.AddHttpClientProxies(
typeof(Service3ApplicationContractsModule).Assembly,
remoteServiceConfigurationName: "Service3"
);
Misconfigured remote service name or base URL won’t usually produce 403, but verify them while you’re here.
Based on similar ABP microservice setups (including the referenced zero‑trust sample):
Create/verify a client in your identity server representing API-1, with the client_credentials grant type and the scopes required to call API-2 / API-3.
Grant required permissions to that client (if you use centralized permission management and client permission provider).
Configure API‑1 to use those client credentials when requesting tokens and to attach the token on all remote calls (via an IRemoteServiceHttpClientAuthenticator).
Align [Authorize] usage on API-2 / API-3 so that the required policies / permissions can be satisfied by the calling client’s principal (not only by interactive users).
Test directly with the same token
The Zero Trust Microservice Architecture with ABP Framework article shows a complete, working pattern of:
a client-credentials token flow with an IRemotingTokenStore plus a custom IRemoteServiceHttpClientAuthenticator for inter-service calls. You can align your solution with that pattern to make your inter-service calls and permissions robust.
After upgrading to ABP 10, you must:
realign the OpenIddict client, scopes, and audiences, and review [Authorize] attributes and permission assignments accordingly. Once client credentials, scopes/permissions, and the HTTP client authenticator are correctly configured, the 403s from your modular APIs will stop.
This response was generated by AI. If it didn’t solve your problem, feel free to post a new answer — our support team will jump in to help you soon.