Tuesday, 13 March 2018

Another process has added/updated/deleted the '{0}' record. Your changes will be lost.

Hi There,

Today I want to discuss what the “Another process has added/updated/deleted” error is and how to investigate and fix it.

Why does it happen?
Let’s start by understanding the error. It comes from a mismatch between the user’s copy of the data and the actual values in the database.
Before I jump into details, I want to remind you of a few architectural points about Acumatica:
  • Acumatica commits records one by one in most cases. If several records are modified, the UPDATE statement is executed multiple times.
  • Acumatica also stores the data that needs to be saved in an in-memory, user-specific cache. So it is possible for two users to hold different versions of the same record, with different values that are not yet saved.
  • Saving is always done inside a transaction.
Back to our exception: there are two possible reasons for this error:
  • The record was not found in the database during the search by keys
  • The record was modified and its timestamp has changed

Search By Keys 
When Acumatica saves any record, it sends an UPDATE/DELETE statement to the database restricted by the keys and some other fields. The keys are needed to update only the exact records that require it. That is why keys are so important for data saving.

When you update a record, SQL Server returns the number of records updated/deleted.
As Acumatica updates records one by one, it expects SQL Server to report exactly one affected record. The number of affected records is returned by the ADO.NET classes:
int rowcount = command.ExecuteNonQuery();

Acumatica throws our error when rowcount is zero. The usual reasons are:
  • The record really was deleted/changed before Acumatica committed the update. For example, this may have been done by another user, or even by your own code earlier.
  • The WHERE clause was built incorrectly, so Acumatica cannot find any record to update
  • The key of the record was changed before you committed your changes.
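Putting these pieces together, the save logic can be sketched roughly like this. This is a simplified illustration, not Acumatica’s actual source: the connection, the original timestamp variable, and the exception type are all assumptions made for the example.

```csharp
// Simplified sketch of an optimistic-concurrency check during save.
// The UPDATE is restricted by the key fields and the timestamp that
// was read earlier, so a concurrent change makes it match zero rows.
using (var command = new SqlCommand(
    "UPDATE GLSetup SET PostClosedPeriods = @value " +
    "WHERE CompanyID = @companyID AND tstamp <= @tstamp", connection))
{
    command.Parameters.AddWithValue("@value", 0);
    command.Parameters.AddWithValue("@companyID", 2);
    command.Parameters.AddWithValue("@tstamp", originalTimestamp);

    int rowcount = command.ExecuteNonQuery();
    if (rowcount == 0)
    {
        // No row matched: the record was deleted, re-keyed, or its
        // timestamp moved forward -> "Another process has ..." error.
        throw new InvalidOperationException(
            "Another process has added/updated/deleted the record.");
    }
}
```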
Timestamp

Acumatica uses a timestamp as part of its optimistic-locking approach to sharing records in a concurrent environment. The timestamp is basically a version mark that changes whenever the record is modified. Acumatica selects the timestamp when it reads a record and keeps it in memory; during update it adds a validation that the timestamp hasn’t changed since the record was read:
update GLSetup set PostClosedPeriods = 0
where CompanyID = 2
       and tstamp <= 0x00000000000F6959

Each time you update a record, its timestamp changes and becomes bigger than the previous version. In MS SQL Server, timestamp is a special data type managed by SQL Server itself. In MySQL, timestamp is simply a datetime column that keeps the time of the record’s modification.

During insert/update/delete operations Acumatica checks the timestamp and may fail to find the record, the same way as described in the previous part: the update returns zero records.
In the example above, the record will not be updated if the version has changed.

The key reasons why it may happen:
  • Another user or process really has changed this record
  • You have saved this record twice in your code
  • PXGraph stores a wrong or old timestamp for comparison. The graph reads and stores the timestamp when the data is read, and the database value may later be changed manually or by a trigger.

How to diagnose it?
Based on the description above, you can see that the problem originates on the SQL Server side, but it is validated and raises the error only on the application-server side after query execution. So the best way to get to the root cause is to check the SQL statement. You can capture it with a profiling tool (for example, SQL Server Profiler).
As soon as you get the query, copy and paste it into SQL Server Management Studio. Then check the WHERE clause and try to execute the statement. In most cases you will see the problem and be able to fix it.

If you cannot reproduce the error after save, it might be flow related. In that case I recommend adding an event handler for RowPersisting (which occurs right before save) and putting a breakpoint there. Try to execute your statement as soon as the code hits your breakpoint.
Please note that some of the data might not be committed yet, as the transaction is still open. To read data from inside the transaction you can use the NOLOCK hint of SQL Server:
select * from NumberingSequence with (nolock)
This hint ignores the transaction isolation level and reads uncommitted data as well.
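The debugging hook mentioned above can be sketched like this (SOOrderEntry and SOOrder are just example graph and DAC names; the handler does nothing except give you a place to pause):

```csharp
// Hypothetical graph extension used only for debugging the save flow.
// Put a breakpoint inside the handler; when it hits, the transaction is
// still open, so inspect the data in Management Studio WITH (NOLOCK).
public class SOOrderEntryExt : PXGraphExtension<SOOrderEntry>
{
    protected virtual void SOOrder_RowPersisting(
        PXCache cache, PXRowPersistingEventArgs e)
    {
        SOOrder row = (SOOrder)e.Row;
        // Breakpoint here: check the row's key fields and compare them
        // with the WHERE clause of the captured UPDATE statement.
    }
}
```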

Other common issues you can check for in the code:
  • Make sure you have the correct keys defined in the DAC
  • Make sure you don't save the record twice
  • If you have any database identity columns, make sure the appropriate attributes are defined in the DAC
  • If you have a master-detail relationship, make sure the child class has the PXDBDefault attribute assigned to the linked keys. It is a common problem that the child record is saved with a wrong key
  • If you have an auto-generated key, make sure the key is updated properly before saving.
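Several points of this checklist can be illustrated with a small DAC fragment (the field and table names are made up for the example):

```csharp
// Hypothetical child DAC: the linked key is marked IsKey and is
// defaulted from the parent record via PXDBDefault, so the child
// is saved with the parent's (possibly auto-generated) key value.
[PXDBString(15, IsKey = true, IsUnicode = true)]
[PXDBDefault(typeof(SOOrder.orderNbr))]
public virtual string OrderNbr { get; set; }

[PXDBInt(IsKey = true)]
public virtual int? LineNbr { get; set; }

// Identity column: PXDBIdentity tells Acumatica that the database
// generates this value itself.
[PXDBIdentity]
public virtual int? RecordID { get; set; }
```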
I wish you fast troubleshooting!
