
Activities of "cangunaydin"

Okay, I have changed all my aggregate roots to implement the IMultiTenant interface. Thank you for the explanation.

I have tested it; it works now. Thank you @liangshiwei.

Hello @liangshiwei, I am kind of lost over here. I know I am asking too many questions, but I want to truly understand this so I can build on it. Thank you for your patience with me :)

As I understand it, TenantId is useful when you have a single database shared by multiple tenants. But what I am trying to build uses a separate database for each tenant, so I believed I would not need any TenantId, since each tenant's data is encapsulated in its own database. However, in my code, when an aggregate root does not implement the IMultiTenant interface, ABP looks in the shared database even though the tenant has a separate connection string.

So I have come to the conclusion that even though I do not logically need a TenantId in that separate database, I should still have one so ABP can perform the operation on the separate database?

Did I get that right, or am I missing something here?

OK, I see. It seems the tables created in the separate database for entities without the IMultiTenant interface are never used.

Then can we come to this conclusion:

If you are building a multi-tenant system with a separate database connection string per tenant, you should still implement IMultiTenant in ABP, even though a TenantId column is not strictly necessary when each tenant has its own database.

Hello again. Maybe what I want to say is a little unclear. As an example: if Tenant A has a separate database connection string and a table's entity does not implement the IMultiTenant interface, shouldn't an operation performed on behalf of that tenant still look at the Tenant A database first? That's what I am trying to achieve. So in my application,

I have a shared database with the connection string Host=localhost;Port=5432;Database=Adzup;

and "Tenant A" has the connection string Host=localhost;Port=5432;Database=Adzup_TenantA;

When I apply my migrations, the Playlist, File, and PlaylistFile tables are created in both databases, and when I do the insert:

public async Task CreateBatchAsync(Guid playlistId, CreateOrUpdatePlaylistFilesDto input)
{
    // Ensure the playlist exists (GetAsync throws if it does not)
    await _playlistRepository.GetAsync(playlistId);
    var newFileIds = input.FileIds.ToList();
    var playlistFiles = await _playlistFileManager.CreateBatchAsync(playlistId, newFileIds);
    foreach (var playlistFile in playlistFiles)
    {
        await _playlistFileRepository.InsertAsync(playlistFile);
    }
}

I am expecting the PlaylistFile rows to be inserted using the connection string Host=localhost;Port=5432;Database=Adzup_TenantA;, since my CurrentTenant has this connection string. Instead, it tries to insert using Host=localhost;Port=5432;Database=Adzup; and I can see that the foreign key mentioned in the error exists in the Adzup_TenantA database.

OK, I think I get it now.

public async Task RemoveManyAsync(IEnumerable<string> keys, CancellationToken token = default)
{
    keys = Check.NotNull(keys, nameof(keys));

    token.ThrowIfCancellationRequested();
    await ConnectAsync(token);

    await RedisDatabase.KeyDeleteAsync(keys.Select(key => (RedisKey)(Instance + key)).ToArray());
}

is changed to

protected virtual Task[] PipelineRemoveMany(
    IDatabase cache,
    IEnumerable<string> keys)
{
    return keys.Select(key => cache.KeyDeleteAsync(InstancePrefix.Append(key))).ToArray<Task>();
}

So it performs single-key operations one by one even when there are multiple keys. I will try this and let you know. Thank you for the assistance.
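The idea behind that change can be sketched outside of .NET as well. Below is a minimal Python illustration (not ABP code): each key gets its own single-key DELETE, so a Redis cluster never receives a cross-slot multi-key command. `RecordingClient` is a hypothetical stand-in for a real Redis client, used only to show what gets issued.

```python
def pipeline_remove_many(client, keys, instance_prefix=""):
    """Issue one single-key delete per key, mirroring the PipelineRemoveMany idea.

    A cluster routes each single-key command to the right node, so no
    CROSSSLOT error can occur. 'client' only needs a delete(key) method.
    """
    return [client.delete(instance_prefix + key) for key in keys]


class RecordingClient:
    """Hypothetical stand-in for a real Redis client, for illustration only."""

    def __init__(self):
        self.deleted = []

    def delete(self, key):
        self.deleted.append(key)
        return True


client = RecordingClient()
pipeline_remove_many(client, ["a", "b", "c"], instance_prefix="cache:")
print(client.deleted)  # ['cache:a', 'cache:b', 'cache:c']
```

The trade-off is the one discussed later in this thread: N round trips (or N pipelined commands) instead of one multi-key command, which is slower but cluster-safe.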

Hello again. Can you explain a little what has changed in the code and how it solves the problem? Maybe I am wrong, but I couldn't see any hash-tags in the code. I couldn't try your code because my app is in production right now; I need to create a staging environment first, so I will try it at the weekend. It seems the changes are related to the expiration date? I was expecting something like this instead:

protected virtual Task[] PipelineSetMany(
    IEnumerable<KeyValuePair<string, byte[]>> items,
    DistributedCacheEntryOptions options)
{
    items = Check.NotNull(items, nameof(items));
    options = Check.NotNull(options, nameof(options));

    var itemArray = items.ToArray();
    var tasks = new Task[itemArray.Length];
    var creationTime = DateTimeOffset.UtcNow;
    var absoluteExpiration = GetAbsoluteExpiration(creationTime, options);

    for (var i = 0; i < itemArray.Length; i++)
    {
        var keyWithHashTag = $"{{{Instance}}}{itemArray[i].Key}";
        tasks[i] = RedisDatabase.ScriptEvaluateAsync(GetSetScript(), new RedisKey[] { keyWithHashTag },
            new RedisValue[]
            {
                absoluteExpiration?.Ticks ?? NotPresent,
                options.SlidingExpiration?.Ticks ?? NotPresent,
                GetExpirationInSeconds(creationTime, absoluteExpiration, options) ?? NotPresent,
                itemArray[i].Value
            });
    }

    return tasks;
}

The important part is var keyWithHashTag = $"{{{Instance}}}{itemArray[i].Key}";. It probably shouldn't be Instance but something similar, so the keys are written with hash-tags. Actually, I noticed the key normalizer class while looking at the code; I think that is the one I need to override. Something like this:

public class DistributedCacheKeyNormalizer : IDistributedCacheKeyNormalizer, ITransientDependency
{
    protected ICurrentTenant CurrentTenant { get; }

    protected AbpDistributedCacheOptions DistributedCacheOptions { get; }

    public DistributedCacheKeyNormalizer(
        ICurrentTenant currentTenant,
        IOptions<AbpDistributedCacheOptions> distributedCacheOptions)
    {
        CurrentTenant = currentTenant;
        DistributedCacheOptions = distributedCacheOptions.Value;
    }

    public virtual string NormalizeKey(DistributedCacheKeyNormalizeArgs args)
    {
        var normalizedKey = $"c:{args.CacheName},k:{DistributedCacheOptions.KeyPrefix}{args.Key}";

        if (!args.IgnoreMultiTenancy && CurrentTenant.Id.HasValue)
        {
            normalizedKey = $"t:{{{CurrentTenant.Id.Value}}},{normalizedKey}";
        }

        return normalizedKey;
    }
}

so all the values for the same tenant land in the same hash slot. But as I said, I will try your code and this code at the weekend, and then I can post my findings here.
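The slot behaviour this normalizer relies on can be checked without a running cluster. Redis Cluster computes a key's slot as CRC16(key) mod 16384, and when the key contains a non-empty {...} section, only the substring inside the first pair of braces is hashed. A small Python sketch (the cache names and tenant id are made up for illustration):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc


def key_slot(key: str) -> int:
    """Compute the Redis Cluster hash slot, honouring {hash-tag} semantics."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]  # hash only the tag content
    return crc16(key.encode()) % 16384


tenant = "3fa85f64-5717-4562-b3fc-2c963f66afa6"
k1 = f"t:{{{tenant}}},c:PlaylistCache,k:playlists"
k2 = f"t:{{{tenant}}},c:FileCache,k:files"
print(key_slot(k1) == key_slot(k2))  # True: same hash-tag, same slot
```

Because both normalized keys carry the same {tenant-id} tag, they hash to the same slot, so a multi-key RemoveMany over them cannot produce a CROSSSLOT error.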

Hello @liangshiwei, is there any tutorial, sample, or video on doing this interception? And what are the prerequisites? You probably need to build your Docker image in debug mode (or maybe not), and according to the docs, I guess it forwards all the traffic to your local environment. Is this just port forwarding, or is something else in play here? I have an app created with the ABP CLI, and I don't know where to start with interception; it would be great to have a tutorial for existing applications.

Hello again. A Redis cluster is already responsible for data consistency, but the problem is multi-key operations (like RemoveMany): if the keys live on different nodes of the cluster, the operation will always throw an error. It doesn't matter where you get your Redis cluster from; the application needs to handle these cases. There are a couple of ways to do this. Here are two short articles you can look at: https://www.dragonflydb.io/error-solutions/crossslot-keys-in-request-dont-hash-to-the-same-slot https://medium.com/@mbh023/redis-multi-key-command-in-cluster-mode-feat-cross-slot-ec27b999f169

As I see it, there are four different solutions to the problem:

1. Use hash-tags (the most common approach): insert your keys with curly braces {}, so Redis assigns them all to the same slot. Then RemoveMany won't throw an error, since all the data lives on the same node.
2. Skip RemoveMany and do single-key operations (like Remove()) instead, but that will be slower if you have many keys to remove.
3. As the second article suggests, use the same algorithm Redis uses to decide where data lives, so you can group the keys to delete according to their nodes.
4. Use a single Redis instance, since every record will then be on the same node (if you need performance, this can become a bottleneck in your app).
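Solution 3 above can be sketched as follows: compute each key's slot with the same CRC16 Redis Cluster uses, group the keys by slot, and then issue one multi-key delete per group. This is a rough, self-contained Python sketch (key names are made up), not production code:

```python
from collections import defaultdict


def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), as used by Redis Cluster for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc


def key_slot(key: str) -> int:
    """Redis Cluster hash slot for a key, honouring {hash-tag} semantics."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]  # hash only the tag content
    return crc16(key.encode()) % 16384


def group_by_slot(keys):
    """Group keys so each group can be deleted with one multi-key command."""
    groups = defaultdict(list)
    for key in keys:
        groups[key_slot(key)].append(key)
    return groups


# Each DEL can now target one group, so every command stays within a
# single slot and no CROSSSLOT error is raised.
groups = group_by_slot(["playlist:1", "playlist:2", "{t1}file:1", "{t1}file:2"])
```

Strictly speaking, grouping by node (a range of slots) would be enough, but grouping by slot is simpler and always safe, since a slot never spans two nodes.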

I hope I made my point clear.

What I mean is intercepting the service. I just want to connect to the cluster and intercept the service so I can attach my debugger to the remote AKS cluster. Do you need to deploy your app through ABP Studio for that, or can you do it with a manual deployment? https://abp.io/docs/latest/studio/kubernetes#intercept-a-service

Utilizing ABP Studio's interception feature, you have the flexibility to run the entire solution in a Kubernetes cluster while running only a single (or a few) services on your local machine using your IDE. This approach allows you to concentrate on running, testing, and debugging your service without concerning yourself with the configuration and launch details of the rest of the system.

Made with ❤️ on ABP v9.1.0-preview. Updated on November 20, 2024, 13:06