Okay, I have changed all my aggregate roots to implement the IMultiTenant interface. Thank you for the explanation.
I have tested it and it works now, thank you @liangshiwei.
Hello @liangshiwei, I am a bit lost here. I know I am asking a lot of questions, but I want to truly understand this so I can build on it correctly. Thank you for your patience with me :)
As I understand it, TenantId is useful when you have a single database shared by multiple tenants. What I am trying to build is a separate database for each tenant, so I assumed I would not need a TenantId, since each tenant's data is already encapsulated in its own database. But as I see in my code, when an aggregate root does not implement the IMultiTenant interface, ABP looks at the shared database even though the tenant has a separate connection string.
So I have come to the conclusion that even if the separate database does not logically need a TenantId, I should still have one so that ABP performs the operation on the separate database.
Did I get that right, or am I missing something here?
OK, I see. It seems the tables created in the separate database for entities without the IMultiTenant interface are simply never used.
Then can we come to this conclusion:
If you are building a multi-tenant system with ABP and you want to use a separate database connection string per tenant, you should still implement IMultiTenant on your entities, even though the TenantId column is not strictly necessary when every tenant has its own database.
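Just to check my own conclusion in code, here is a minimal sketch of what I mean (Device is just a made-up example entity, not from my real code): even with a database per tenant, the aggregate root still implements IMultiTenant and carries a TenantId, so that ABP can resolve the tenant-specific connection string for it.

using System;
using Volo.Abp.Domain.Entities;
using Volo.Abp.MultiTenancy;

// Hypothetical example entity: even with a separate database per tenant,
// the aggregate root implements IMultiTenant so ABP resolves the
// tenant-specific connection string when inserting/querying it.
public class Device : AggregateRoot<Guid>, IMultiTenant
{
    // As I understand it, ABP fills this from ICurrentTenant on insert.
    public Guid? TenantId { get; protected set; }

    public string Name { get; private set; } = null!;

    protected Device() { }

    public Device(Guid id, string name) : base(id)
    {
        Name = name;
    }
}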
Hello again. Maybe what I am trying to say is a little unclear. As an example: if Tenant A has a separate database connection string and a table's entity does not implement the IMultiTenant interface, shouldn't ABP still look at the Tenant A database first when the operation runs on behalf of that tenant? That is what I am trying to achieve. So in my application,
I have a shared database with the connection string
Host=localhost;Port=5432;Database=Adzup;
and "Tenant A" has the connection string
Host=localhost;Port=5432;Database=Adzup_TenantA;
When I apply my migrations, the Playlist, File and PlaylistFile tables are created in both databases, and when I do the insert
public async Task CreateBatchAsync(Guid playlistId, CreateOrUpdatePlaylistFilesDto input)
{
    await _playlistRepository.GetAsync(playlistId);

    var newFileIds = input.FileIds.ToList();
    var playlistFiles = await _playlistFileManager.CreateBatchAsync(playlistId, newFileIds);

    foreach (var playlistFile in playlistFiles)
    {
        await _playlistFileRepository.InsertAsync(playlistFile);
    }
}
I am expecting the PlaylistFile rows to be inserted using the connection string Host=localhost;Port=5432;Database=Adzup_TenantA;
since my current tenant has that connection string,
but instead it tries to insert them using Host=localhost;Port=5432;Database=Adzup;
and I can see that the foreign key mentioned in the error does exist in the Host=localhost;Port=5432;Database=Adzup_TenantA; database.
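As a side note, one way I could check which connection string ABP resolves under the current tenant would be something like the rough sketch below (my own assumption; I am assuming IConnectionStringResolver.ResolveAsync with no arguments returns the default connection string for the active tenant):

using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using Volo.Abp.Data;
using Volo.Abp.DependencyInjection;

// Hypothetical diagnostic helper: logs the connection string that ABP
// resolves for the current tenant, so I can see whether the tenant
// database (Adzup_TenantA) or the shared database (Adzup) is picked.
public class ConnectionStringProbe : ITransientDependency
{
    private readonly IConnectionStringResolver _connectionStringResolver;
    private readonly ILogger<ConnectionStringProbe> _logger;

    public ConnectionStringProbe(
        IConnectionStringResolver connectionStringResolver,
        ILogger<ConnectionStringProbe> logger)
    {
        _connectionStringResolver = connectionStringResolver;
        _logger = logger;
    }

    public async Task LogResolvedConnectionStringAsync()
    {
        // With no name given, the default connection string is resolved
        // for whatever tenant ICurrentTenant currently points at.
        var connectionString = await _connectionStringResolver.ResolveAsync();
        _logger.LogInformation("Resolved connection string: {ConnectionString}", connectionString);
    }
}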
ABP Framework version: v8.1.4
UI Type: Angular
Database System: EF Core (PostgreSQL)
Tiered (for MVC) or Auth Server Separated (for Angular): yes
Exception message and full stack trace: [17:50:42 INF] fail: 11/16/2024 17:50:42.340 CoreEventId.SaveChangesFailed[10000] (Microsoft.EntityFrameworkCore.Update) An exception occurred in the database while saving changes for context type 'Doohlink.PlaylistManagement.EntityFrameworkCore.PlaylistManagementDbContext'. Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details. ---> Npgsql.PostgresException (0x80004005): 23503: insert or update on table "PlaylistManagementPlaylistFiles" violates foreign key constraint "FK_PlaylistManagementPlaylistFiles_PlaylistManagementFiles_Fil~"
DETAIL: Key (FileId)=(3a1647cd-ead1-7bbc-8095-7c274e70176b) is not present in table "PlaylistManagementFiles".
Steps to reproduce the issue: Hello, I have a general question about data filtering. I have three tables in my database: Playlist, File and PlaylistFile. You can see the entities and aggregate roots below.
File

public class File : AggregateRoot<Guid>, IMultiTenant, ISoftDelete
{
    public bool IsDeleted { get; protected set; }
    public Guid? TenantId { get; protected set; }
    public string Name { get; private set; } = null!;
    //rest of the code
}
Playlist

public class Playlist : FullAuditedAggregateRoot<Guid>, IMultiTenant
{
    public Guid? TenantId { get; protected set; }
    public string Name { get; private set; } = null!;
    public string? Description { get; set; }
    //rest of the code
}
PlaylistFile

public class PlaylistFile : CreationAuditedAggregateRoot<Guid>, ISoftDelete
{
    public bool IsDeleted { get; protected set; }
    public Guid PlaylistId { get; private set; }
    public Guid FileId { get; private set; }
    //rest of the code
}
As you can see, Playlist and File implement the IMultiTenant interface, but since PlaylistFile belongs to two other tables, I preferred not to make it IMultiTenant. The problem I am having is when I use a separate database for a tenant: since PlaylistFile does not implement IMultiTenant, ABP tries to insert it into the shared database, even though the current tenant has a separate database connection string. Here is an example from an app service.
public async Task CreateBatchAsync(Guid playlistId, CreateOrUpdatePlaylistFilesDto input)
{
    await _playlistRepository.GetAsync(playlistId);

    var newFileIds = input.FileIds.ToList();
    var playlistFiles = await _playlistFileManager.CreateBatchAsync(playlistId, newFileIds);

    foreach (var playlistFile in playlistFiles)
    {
        await _playlistFileRepository.InsertAsync(playlistFile);
    }
}
In this code, _playlistRepository looks at the separate database because Playlist implements IMultiTenant, but when I try to insert the records through _playlistFileRepository, it takes the shared connection string, and I get an exception because the FileId is not present in the shared database. Is this the expected behavior? I know I can add the IMultiTenant interface to the PlaylistFile aggregate root, but I would prefer not to, since it is a join table that just reflects the TenantId of its parent tables. Is there another way to fix this?
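For reference, the variant I am trying to avoid would look roughly like this (a sketch only; I am assuming ABP fills TenantId from the current tenant on insert when the entity implements IMultiTenant):

using System;
using Volo.Abp;
using Volo.Abp.Domain.Entities.Auditing;
using Volo.Abp.MultiTenancy;

// Sketch of PlaylistFile with IMultiTenant added. The TenantId just mirrors
// the tenant of the parent Playlist/File, which is the duplication I wanted
// to avoid, but it lets ABP resolve the tenant's connection string.
public class PlaylistFile : CreationAuditedAggregateRoot<Guid>, IMultiTenant, ISoftDelete
{
    public bool IsDeleted { get; protected set; }
    public Guid? TenantId { get; protected set; }
    public Guid PlaylistId { get; private set; }
    public Guid FileId { get; private set; }

    protected PlaylistFile() { }

    public PlaylistFile(Guid id, Guid playlistId, Guid fileId) : base(id)
    {
        PlaylistId = playlistId;
        FileId = fileId;
    }
}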
OK, I think I get it now.
public async Task RemoveManyAsync(IEnumerable<string> keys, CancellationToken token = default)
{
    keys = Check.NotNull(keys, nameof(keys));
    token.ThrowIfCancellationRequested();

    await ConnectAsync(token);
    await RedisDatabase.KeyDeleteAsync(keys.Select(key => (RedisKey)(Instance + key)).ToArray());
}
is changed to
protected virtual Task[] PipelineRemoveMany(
    IDatabase cache,
    IEnumerable<string> keys)
{
    return keys.Select(key => cache.KeyDeleteAsync(InstancePrefix.Append(key))).ToArray<Task>();
}
So it performs single-key operations one by one, even when there are multiple keys. I will try this and let you know, thank you for the assistance.
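If I read it correctly, the returned tasks would then be awaited together, something like the sketch below (my own assumption about how the pipeline is consumed, not the actual ABP code). Because every KeyDeleteAsync call carries a single key, each command can be routed to whichever cluster node owns that key, so no CROSSSLOT error is possible:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

public class RemoveManySketch
{
    // Assumed prefix; in ABP this would be the cache instance prefix.
    private static readonly RedisKey InstancePrefix = "MyApp:";

    // One KeyDeleteAsync per key: StackExchange.Redis pipelines the commands
    // and routes each one to the node that owns that key's hash slot.
    private static Task[] PipelineRemoveMany(IDatabase cache, IEnumerable<string> keys)
    {
        return keys.Select(key => cache.KeyDeleteAsync(InstancePrefix.Append(key))).ToArray<Task>();
    }

    public static async Task RemoveManyAsync(IDatabase cache, IEnumerable<string> keys)
    {
        // Await all pipelined single-key deletes together.
        await Task.WhenAll(PipelineRemoveMany(cache, keys));
    }
}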
Hello again. Can you explain a little about what has changed in the code, so that it solves the problem? Maybe I am wrong, but I could not see any hash tags in the code. I could not try your code yet because my app is in production right now; I need to create a staging environment, so I will try it at the weekend. It seems like the changes are related to the expiration date? I was expecting something like this instead:
protected virtual Task[] PipelineSetMany(
    IEnumerable<KeyValuePair<string, byte[]>> items,
    DistributedCacheEntryOptions options)
{
    items = Check.NotNull(items, nameof(items));
    options = Check.NotNull(options, nameof(options));

    var itemArray = items.ToArray();
    var tasks = new Task[itemArray.Length];
    var creationTime = DateTimeOffset.UtcNow;
    var absoluteExpiration = GetAbsoluteExpiration(creationTime, options);

    for (var i = 0; i < itemArray.Length; i++)
    {
        var keyWithHashTag = $"{{{Instance}}}{itemArray[i].Key}";
        tasks[i] = RedisDatabase.ScriptEvaluateAsync(GetSetScript(), new RedisKey[] { keyWithHashTag },
            new RedisValue[]
            {
                absoluteExpiration?.Ticks ?? NotPresent,
                options.SlidingExpiration?.Ticks ?? NotPresent,
                GetExpirationInSeconds(creationTime, absoluteExpiration, options) ?? NotPresent,
                itemArray[i].Value
            });
    }

    return tasks;
}
The important part is
var keyWithHashTag = $"{{{Instance}}}{itemArray[i].Key}";
It probably should not be Instance exactly, but something similar, so that the keys are written with hash tags. Actually, while looking at the code I noticed the key normalizer class; I think that is the one I need to override, with something like this:
public class DistributedCacheKeyNormalizer : IDistributedCacheKeyNormalizer, ITransientDependency
{
    protected ICurrentTenant CurrentTenant { get; }
    protected AbpDistributedCacheOptions DistributedCacheOptions { get; }

    public DistributedCacheKeyNormalizer(
        ICurrentTenant currentTenant,
        IOptions<AbpDistributedCacheOptions> distributedCacheOptions)
    {
        CurrentTenant = currentTenant;
        DistributedCacheOptions = distributedCacheOptions.Value;
    }

    public virtual string NormalizeKey(DistributedCacheKeyNormalizeArgs args)
    {
        var normalizedKey = $"c:{args.CacheName},k:{DistributedCacheOptions.KeyPrefix}{args.Key}";

        if (!args.IgnoreMultiTenancy && CurrentTenant.Id.HasValue)
        {
            normalizedKey = $"t:{{{CurrentTenant.Id.Value}}},{normalizedKey}";
        }

        return normalizedKey;
    }
}
That way all the values belonging to the same tenant end up in the same slot. But as I said, I will try both your code and this code at the weekend, and then I can post what I find here.
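If I end up going with the normalizer approach, I would register my version so it replaces the default one. This is only a sketch of how I think the replacement would be wired up with ABP's dependency injection attributes (MyDistributedCacheKeyNormalizer is my hypothetical class name):

using Microsoft.Extensions.Options;
using Volo.Abp.Caching;
using Volo.Abp.DependencyInjection;
using Volo.Abp.MultiTenancy;

// Hypothetical replacement: same idea as the code above, but registered so
// that it replaces the default IDistributedCacheKeyNormalizer implementation.
[Dependency(ReplaceServices = true)]
[ExposeServices(typeof(IDistributedCacheKeyNormalizer))]
public class MyDistributedCacheKeyNormalizer : DistributedCacheKeyNormalizer
{
    public MyDistributedCacheKeyNormalizer(
        ICurrentTenant currentTenant,
        IOptions<AbpDistributedCacheOptions> distributedCacheOptions)
        : base(currentTenant, distributedCacheOptions)
    {
    }

    public override string NormalizeKey(DistributedCacheKeyNormalizeArgs args)
    {
        var normalizedKey = $"c:{args.CacheName},k:{DistributedCacheOptions.KeyPrefix}{args.Key}";

        if (!args.IgnoreMultiTenancy && CurrentTenant.Id.HasValue)
        {
            // Wrap the tenant id in {} so Redis Cluster hashes only the tag
            // and keeps all of this tenant's keys in the same slot.
            normalizedKey = $"t:{{{CurrentTenant.Id.Value}}},{normalizedKey}";
        }

        return normalizedKey;
    }
}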
Hello @liangshiwei, is there any tutorial, sample, or video on how to do this interception? And what are the prerequisites for it? You probably need to build your Docker image in debug mode (or maybe not), and according to the docs I guess it forwards all the traffic to your local environment. Is this just port forwarding, or is there something else in play here? I have an app that was created with the ABP CLI and I don't know where to start with the interception; it would be great to have a tutorial for existing applications.
Hello again. The Redis cluster is already responsible for data consistency, but the problem is that when you do multi-key operations (like RemoveMany), if the keys live on different nodes of the cluster, it will always throw an error. It does not matter where you get your Redis cluster from; the application needs to handle these cases. There are a couple of ways to do this. Here are two short articles you can look at: https://www.dragonflydb.io/error-solutions/crossslot-keys-in-request-dont-hash-to-the-same-slot and https://medium.com/@mbh023/redis-multi-key-command-in-cluster-mode-feat-cross-slot-ec27b999f169
As I see it, there are four different solutions to the problem (a small sketch of the first one follows this list):
1. Use hash tags (the most common approach): insert your keys with a tag in curly braces {}, so Redis places all of them in the same slot. Then RemoveMany will not throw an error, since all the data lives on the same node.
2. Skip RemoveMany and do single-key operations instead (like Remove()), which is slower if you have a lot of keys to remove.
3. As the second article describes, use the same algorithm Redis uses to decide where a key lives, and group the keys to delete per node.
4. Use a single Redis instance, so every record is on the same node (if you need performance, this can become a bottleneck in your app).
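Here is a minimal sketch of the hash-tag idea with StackExchange.Redis (the connection string and key names are made up): because both keys share the {tenant-a} tag, they hash to the same slot, so a multi-key delete is allowed in cluster mode.

using System.Threading.Tasks;
using StackExchange.Redis;

public static class HashTagExample
{
    public static async Task RunAsync()
    {
        // Hypothetical cluster endpoint.
        var connection = await ConnectionMultiplexer.ConnectAsync("localhost:7000");
        var db = connection.GetDatabase();

        // Only the part inside {} is hashed, so both keys land in the same slot.
        await db.StringSetAsync("{tenant-a}:playlist:1", "value-1");
        await db.StringSetAsync("{tenant-a}:playlist:2", "value-2");

        // Multi-key delete works because the keys share a hash slot;
        // without the shared tag this would fail with a CROSSSLOT error.
        await db.KeyDeleteAsync(new RedisKey[] { "{tenant-a}:playlist:1", "{tenant-a}:playlist:2" });
    }
}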
I hope I could make my point.