hi
You should use the services related to ABP permission management instead of directly reading/writing to the Redis cache.
I didn't intend to use any non-standard methods. I started with the usual IPermissionManager
methods, but they only return the "per-application" permissions, not all of them. Please give me a real code example that would return the permissions for ALL the running applications.
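For reference, the per-application call I started from looks roughly like this (a minimal sketch; IPermissionManager and the "R" role provider name are the standard ABP ones, everything else is my own naming):

```csharp
// Sketch of what I tried: this lists the permission grants for a role,
// but only for the permissions defined in the CURRENT application.
public class PermissionDumpService : ITransientDependency
{
    private readonly IPermissionManager _permissionManager;

    public PermissionDumpService(IPermissionManager permissionManager)
    {
        _permissionManager = permissionManager;
    }

    public async Task<List<PermissionWithGrantedProviders>> GetForRoleAsync(string roleName)
    {
        // Role provider ("R"); only sees this application's permission
        // definitions, which is exactly my problem.
        return await _permissionManager.GetAllAsync(
            RolePermissionValueProvider.ProviderName, roleName);
    }
}
```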
In our ABP-framework-based solutions we used the built-in ABP cache, plus an Ocelot gateway project that aggregated the permissions from the different sites, plus RabbitMQ synchronization to update the permissions cache on all the sites once the permissions were changed on the single Permission Management page.
We have now decided to abandon this structure in favor of a Redis server cache.
Still, it is not clear to me how to get the list of ALL the permissions (for all the sites) now that I have the Redis server cache at hand.
I took a look at the keys on the running Redis server for the started applications, but the naming is a mess and it does not look like they contain the permissions at all... Maybe I need to add some code that manually places ALL the permissions into the Redis cache after the application has started? Which looks a bit weird...
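For context, here is roughly how I inspected the keys (a quick StackExchange.Redis sketch; the endpoint and the key pattern are placeholders, not our real configuration):

```csharp
// Quick inspection sketch (StackExchange.Redis). "localhost:6379" and
// the "*Permission*" pattern are placeholders, not our real setup.
using StackExchange.Redis;

var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var server = mux.GetServer(mux.GetEndPoints()[0]);

// Enumerate keys matching the pattern (uses SCAN under the hood).
foreach (var key in server.Keys(pattern: "*Permission*"))
{
    Console.WriteLine(key); // nothing permission-like shows up for me
}
```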
So far the above scenario has not helped me reproduce the exception. I need to wait for another team member who managed to reproduce it this way. Please wait and do not close the ticket.
However, I was able to get another kind of exception. Please have a look at this one in the meantime.
This issue has been reported as a bug on GitHub, but it has been ignored for many days, so I have to submit a commercial ticket to get better attention.
2024-06-20 14:19:04.849 -05:00 INF () Lock is acquired for TokenCleanupBackgroundWorker
[14:19:04 INF] Lock is acquired for TokenCleanupBackgroundWorker
2024-06-20 14:19:04.859 -05:00 ERR () An exception was thrown while activating Volo.Abp.OpenIddict.Tokens.TokenCleanupService -> ?:OpenIddict.Abstractions.IOpenIddictTokenManager -> Volo.Abp.OpenIddict.Tokens.AbpTokenManager -> Volo.Abp.OpenIddict.Tokens.AbpOpenIddictTokenCache -> Volo.Abp.OpenIddict.Tokens.AbpOpenIddictTokenStore.
Autofac.Core.DependencyResolutionException: An exception was thrown while activating Volo.Abp.OpenIddict.Tokens.TokenCleanupService -> ?:OpenIddict.Abstractions.IOpenIddictTokenManager -> Volo.Abp.OpenIddict.Tokens.AbpTokenManager -> Volo.Abp.OpenIddict.Tokens.AbpOpenIddictTokenCache -> Volo.Abp.OpenIddict.Tokens.AbpOpenIddictTokenStore.
---> Autofac.Core.DependencyResolutionException: None of the constructors found on type 'Volo.Abp.OpenIddict.Tokens.AbpOpenIddictTokenStore' can be invoked with the available services and parameters:
Cannot resolve parameter 'Volo.Abp.OpenIddict.Tokens.IOpenIddictTokenRepository repository' of constructor 'Void .ctor(Volo.Abp.OpenIddict.Tokens.IOpenIddictTokenRepository, Volo.Abp.Uow.IUnitOfWorkManager, Volo.Abp.Guids.IGuidGenerator, Volo.Abp.OpenIddict.Applications.IOpenIddictApplicationRepository, Volo.Abp.OpenIddict.Authorizations.IOpenIddictAuthorizationRepository, Volo.Abp.OpenIddict.AbpOpenIddictIdentifierConverter, Volo.Abp.OpenIddict.IOpenIddictDbConcurrencyExceptionHandler)'.
We cannot provide the code of our solution, but the scenario in which we were able to reproduce this is as follows (I guess there are others; this might just give you an idea of where in the code to look):
Is there a way to turn off DB storage instead? We're going to have many large articles and would prefer to work with the file system directly.
We created an ABP project and added the Volo.Docs module as described in the tutorial. The next step was configuring a document project for "FileSystem" and creating a file with the article.
We faced the following error when trying to open this article:
ORA-12899: value too large for column "ABDEV"."DOCSDOCUMENT"."CONTENT" (actual: 2501, maximum: 2000).
Why is the file content stored in the DB instead of the file system? And how can this limitation be avoided? Just to anticipate some of your questions: the project is commercial and we cannot share the source code.
I've discovered an additional custom protection mechanism in our project, related to company licensing, i.e. I've found the root cause and it is not related to ABP permissions. Please restore the points and close the ticket. Sorry.
I am observing very weird permission behavior.
Let's say I have Tenant1 ... TenantX. I have never had any issues with users of these tenants accessing API resources protected by permissions. For simplicity, let's take the "admin" role - this is where the issue is reproduced.
Now I have received a complaint from TenantY. Its users - who have the "admin" role assigned - cannot access specific resources (they get error 403), whereas - what is even more confusing - the other resources (and their corresponding permissions) do not have this issue.
The "admin" role for TenantY does not differ from the "admin" role in the other tenants (at least in the UI).
I checked the DB and the tenant settings thoroughly, but cannot see anything unusual.
Also, the data in AbpPermissionGrants looks the same for the "admin" roles of all tenants, i.e. the "admin" role of every relevant tenant has the complained-about permission granted.
Another important note: when I now create a new tenant, I observe the same 403 error.
Do you have any idea what could be wrong, or where I should look and which settings I should check?
@liangshiwei - ok, if you are sure that this is solely an EF Core / Oracle issue, please close the ticket and restore the points. Thanks.
I'm trying to insert about 400 entries into a simple table. The operation is very slow (maybe it's due to the communication between localhost, where the app server runs, and the Azure DB server, but I want to find out before jumping to conclusions).
Here is the code (chunkSize is 100):
private async Task TryCreateChunkAsync(List<GeoPoint> geoPoints, int chunkSize)
{
    var dbContext = await _geoPointRepository.GetDbContextAsync();

    // Disable change tracking while inserting to reduce overhead.
    dbContext.ChangeTracker.AutoDetectChangesEnabled = false;

    for (var skip = 0; skip < geoPoints.Count; skip += chunkSize)
    {
        // Each chunk gets its own non-transactional unit of work.
        using (var uow = _unitOfWorkManager.Begin(requiresNew: true, isTransactional: false))
        {
            await _geoPointRepository.InsertManyAsync(geoPoints.Skip(skip).Take(chunkSize), autoSave: false);
            await uow.CompleteAsync();
        }
    }

    dbContext.ChangeTracker.AutoDetectChangesEnabled = true;
}
I don't know if this approach makes sense - I also tried a different approach, using DbContext directly, after setting BatchSize to 100 for our database:
var dbContext = await _geoPointRepository.GetDbContextAsync();
((MyDbContext)dbContext).AddRange(geoPoints);
await dbContext.SaveChangesAsync();
No difference - it is still very slow. There are no third-party bulk-operation tools for Oracle, so that is not an option in any case. Besides, according to some article online, the AddRange approach with 100 entries per DB round trip is only about 1.5 times slower than a bulk insert on this amount of data.
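To clarify what I mean by setting BatchSize: I configured the EF Core relational option roughly like this (a sketch; the connection string is a placeholder):

```csharp
// Sketch of the batching configuration I applied (connection string
// is a placeholder). MaxBatchSize caps how many statements EF Core
// sends to the database per round trip.
options.UseOracle(connectionString, oracleOptions =>
{
    oracleOptions.MaxBatchSize(100);
});
```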
Another thing - I am not sure whether ABP logging makes the insertion much slower, but I was unable to turn it off: the auditing continued despite applying the [DisableAuditing] attribute to the Controller or AppService method. Please note that I don't want to use IgnoreUrls, since the URL might change in the future - only the method body is relevant. I also don't want to apply [DisableAuditing] to the entity, because in some other methods I DO want the logging as usual.
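To be concrete, this is the kind of place where I applied the attribute (a sketch; the service and method names are placeholders for our real code):

```csharp
// Sketch of where I put the attribute (names are placeholders).
public class GeoPointAppService : ApplicationService
{
    [DisableAuditing] // this alone did not stop the logging for me
    public async Task UploadAsync(List<GeoPointDto> input)
    {
        // ... chunked insert logic as shown earlier ...
    }
}
```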
What puzzles me is this. In the logs, every "Executed DbCommand (XXXms)" entry takes a more or less reasonable amount of time (each DbCommand takes less than 1 second).
But look at this:
Executed action YYY.Controllers.GeoPoints.GeoPointController.UploadAsync (YYY.HttpApi) in 53197.6736ms
I can't explain this. I turned off the auditing by putting the [DisableAuditing] attribute on the entity class. The times of the individual operations do not add up to this total. Why is the total time so huge, and what is most of it spent on? It is somehow related to the fact that I'm inserting many records, because when I do some elementary DB operation the API request time is normal.