Hi,
Could you provide the full steps or share a project that reproduces the problem? I will check it. My email is shiwei.liang@volosoft.com
How about disabling ABP's auto route binding for a microservice?
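If it helps, a minimal sketch of one way to do this in ABP, assuming the standard conventional controllers setup (MyProductAppService is a hypothetical service name):

```csharp
using Volo.Abp.Application.Services;

// Opt a single application service out of ABP's auto API controller
// (conventional route) generation; its methods are no longer exposed over HTTP.
[RemoteService(IsEnabled = false)]
public class MyProductAppService : ApplicationService
{
    // ...
}
```

Alternatively, removing the `options.ConventionalControllers.Create(...)` call for that assembly in the module's `ConfigureServices` disables auto routes for the whole service.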
Thanks. So our controller code should be fine. We need to investigate why the second call failed; the code was working without problems before.
Hi
If several messages are published to the event bus in a short period, will the event handlers process all of them without losing any messages when the Outbox is not used? Please read more about RabbitMQ reliability: https://www.rabbitmq.com/reliability.html
Hi. All the API instances will listen to those messages, but they have to be configured on the same channel: https://github.com/abpframework/abp-samples/tree/master/RabbitMqEventBus You can try this simple sample.
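For reference, the "same channel" part is configured via the `ClientName` in the RabbitMQ event bus settings: instances that share a `ClientName` consume from the same queue. A minimal `appsettings.json` sketch (host and names are placeholders):

```json
{
  "RabbitMQ": {
    "Connections": {
      "Default": { "HostName": "localhost" }
    },
    "EventBus": {
      "ClientName": "MyService1",
      "ExchangeName": "MyExchange"
    }
  }
}
```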
You can read more about when you should (and shouldn't) use the outbox here: https://docs.abp.io/en/abp/latest/Distributed-Event-Bus#outbox-inbox-for-transactional-events
For item 2 I mean this page: https://docs.abp.io/en/abp/latest/Distributed-Event-Bus — see the "Pre-requirements" part: "The outbox/inbox system uses the distributed lock system to handle concurrency when you run multiple instances of your application/service. So, you should configure the distributed lock system with one of the providers as explained in this document. The outbox/inbox system supports Entity Framework Core (EF Core) and MongoDB database providers out of the box. So, your applications should use one of these database providers. For other database providers, see the Implementing a Custom Database Provider section."
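A minimal sketch of what configuring the distributed lock system looks like, assuming the Redis provider (the `AbpDistributedLockingModule` plus the `DistributedLock.Redis` package; the `Redis:Configuration` key is a placeholder):

```csharp
using Medallion.Threading;
using Medallion.Threading.Redis;
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;
using Volo.Abp.Modularity;

public class MyModule : AbpModule
{
    public override void ConfigureServices(ServiceConfigurationContext context)
    {
        var configuration = context.Services.GetConfiguration();

        // Register a Redis-backed IDistributedLockProvider so that multiple
        // service instances coordinate outbox/inbox processing safely.
        context.Services.AddSingleton<IDistributedLockProvider>(sp =>
        {
            var connection = ConnectionMultiplexer
                .Connect(configuration["Redis:Configuration"]);
            return new RedisDistributedSynchronizationProvider(connection.GetDatabase());
        });
    }
}
```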
We do have multiple instances of the microservices. What does configuring the "distributed lock system" mean?
hi
https://docs.abp.io/en/abp/latest/Unit-Of-Work#conventions
Finally figured out the reason: after deploying the code to Azure Kubernetes, NGINX has two settings that default to 60 seconds, and the values need to be changed there: https://ubiq.co/tech-blog/increase-request-timeout-nginx/
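For reference, the two NGINX proxy timeouts that default to 60 seconds can be raised like this (values are examples; adjust to your workload):

```nginx
location / {
    # proxy_read_timeout and proxy_send_timeout both default to 60s;
    # raise them for long-running API calls.
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}
```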
Thanks, Domina
hi
https://docs.abp.io/en/abp/latest/Unit-Of-Work#conventions
I used a timer: the API call times out at 60 seconds. However, according to the Ocelot documentation, the default gateway timeout is 90 seconds. So where does this 60-second timeout come from? The unit of work?
hi
https://docs.abp.io/en/abp/latest/Unit-Of-Work#conventions
I read the document. Disable the unit of work at the entrance to avoid long file operations making DB transactions too long; otherwise, there is a higher chance of deadlocks in database transactions.
Is there a way to write data to the database immediately? I added the true (autoSave) flag to each Insert/Update method call, but it doesn't seem to work. And why does the gateway think the first operation failed and retry the same call a second time?
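In case it's useful, a hedged sketch of forcing an immediate write: `autoSave: true` flushes changes on that call, and a `requiresNew` unit of work commits its own transaction independently of any outer scope (`Product` and `ImportService` are hypothetical names):

```csharp
using System;
using System.Threading.Tasks;
using Volo.Abp.Domain.Repositories;
using Volo.Abp.Uow;

public class ImportService
{
    private readonly IUnitOfWorkManager _unitOfWorkManager;
    private readonly IRepository<Product, Guid> _productRepository;

    public ImportService(
        IUnitOfWorkManager unitOfWorkManager,
        IRepository<Product, Guid> productRepository)
    {
        _unitOfWorkManager = unitOfWorkManager;
        _productRepository = productRepository;
    }

    public async Task SaveNowAsync(Product product)
    {
        // Begin a brand-new unit of work with its own transaction.
        using (var uow = _unitOfWorkManager.Begin(requiresNew: true, isTransactional: true))
        {
            await _productRepository.InsertAsync(product, autoSave: true);
            await uow.CompleteAsync(); // commits immediately, independent of the caller
        }
    }
}
```

Note that if an outer (ambient) unit of work is still active, `autoSave: true` alone only saves within that pending transaction; other readers don't see the data until the outer transaction commits, which can explain delayed visibility.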
hi
There are some timeout settings that you can use in your MongoDB connection string: https://www.mongodb.com/docs/manual/reference/connection-string/
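For example, a connection string with explicit timeouts might look like this (host, database, and values are placeholders):

```
mongodb://mongo-host:27017/MyDatabase?connectTimeoutMS=30000&socketTimeoutMS=120000&serverSelectionTimeoutMS=30000
```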
I don't have a long transaction; the UnitOfWork is disabled on the entrance API. Within the process, wherever DB access is needed, I create a new unit of work with a new transaction.
I found very strange behavior: in every repository Insert/Update call, I pass the autoSave parameter as true and wrap the call in a unit of work. However, the new data only shows up in the DB around 20 seconds after the API completes; I can see the number of records increasing via MongoDB. But the UI, which subscribes to the API method, has already finished and shows an intermediate progress status.
I have a question: if the entrance App Service method has [UnitOfWork] disabled, do I need to add this to every app service method in the whole call stack?
Blob code is also called. I see the exception below, even though the save call is await _container.SaveAsync(fileNamePath, bytes, true);
Azure.RequestFailedException: The specified container already exists. RequestId:c84c6932-d01e-0065-7d95-ea64f5000000 Time:2023-09-19T01:06:57.2040621Z Status: 409 (The specified container already exists.) ErrorCode: ContainerAlreadyExists
Content: <?xml version="1.0" encoding="utf-8"?><Error><Code>ContainerAlreadyExists</Code><Message>The specified container already exists. RequestId:c84c6932-d01e-0065-7d95-ea64f5000000 Time:2023-09-19T01:06:57.2040621Z</Message></Error>
Headers: Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 x-ms-request-id: c84c6932-d01e-0065-7d95-ea64f5000000 x-ms-client-request-id: 2a718994-11d6-4088-8cf6-5a77e3014d89 x-ms-version: 2023-01-03 x-ms-error-code: ContainerAlreadyExists Date: Tue, 19 Sep 2023 01:06:56 GMT Content-Length: 230 Content-Type: application/xml
at Azure.Storage.Blobs.ContainerRestClient.CreateAsync(Nullable`1 timeout, IDictionary`2 metadata, Nullable`1 access, String defaultEncryptionScope, Nullable`1 preventEncryptionScopeOverride, CancellationToken cancellationToken)
at Azure.Storage.Blobs.BlobContainerClient.CreateInternal(PublicAccessType publicAccessType, IDictionary`2 metadata, BlobContainerEncryptionScopeOptions encryptionScopeOptions, Boolean async, CancellationToken cancellationToken, String operationName)
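A hedged sketch (not necessarily how the ABP blob provider handles it internally) of avoiding or tolerating the 409 ContainerAlreadyExists race when multiple instances create the container concurrently (`connectionString` is a placeholder):

```csharp
using Azure;
using Azure.Storage.Blobs;

var container = new BlobContainerClient(connectionString, "my-container");

// Option 1: idempotent creation; no exception if the container already exists.
await container.CreateIfNotExistsAsync();

// Option 2: tolerate the race between concurrent instances.
try
{
    await container.CreateAsync();
}
catch (RequestFailedException ex) when (ex.ErrorCode == "ContainerAlreadyExists")
{
    // Benign: another instance created the container first.
}
```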
hi
Is it because of an automatic feature of the ABP gateway?
Is your project a microservice solution? Otherwise, it does not have a gateway project. Do you use a reverse proxy?
Yes, it is a microservice project. I modified the settings to this: "QoSOptions": { "ExceptionsAllowedBeforeBreaking": 1, "DurationOfBreak": 1000, "TimeoutValue": 1200000 }. It seems to work as expected locally, but I haven't deployed the site yet. I also hit a very weird NullReferenceException at the line _entityAppService.GetById(uniqueId); where _entityAppService is an App Service in another microservice. That means _entityAppService is null. It does not always happen, only occasionally, and I don't understand why.
Now I suspect it is the request timeout settings of the .Host project. I tested locally, and since changing the settings there has been no impact: local testing can always process large data, but the deployed application throws an exception at a certain amount of data. I also noticed that the request is triggered again when the first request fails. Is it because of an automatic feature of the ABP gateway?