Olympia server deadlocks

Hi guys,

We use Olympia, and I have been pushing hard for it as an alternative solution for many of our services.

For context, we currently have 34 services on different physical machines connecting to Olympia. So whenever Olympia crashed, or lost connections and did not pick their subscriptions back up, it set off big red lights all across our systems.

The latest version of Olympia, with its improved ability to recover from a restart, helped a lot in building confidence. There is still work to be done on making Olympia redundant, but that's another story.

Right now I just want to report findings that our DBAs are submitting regarding deadlocks involving Olympia.

[Deadlock Victim Information]:
SPID [ecid]: 346 [4]
Host: DEFCON-05
Application: .Net SqlClient Data Provider
Database: Olympia
Login:
Log Used: 0
Deadlock Priority: 0
Wait Time: 4229
Transaction Start Time: 3/6/2020 6:54:26 AM
Last Batch Start Time: 3/6/2020 6:54:26 AM
Last Batch Completion Time: 3/6/2020 6:54:26 AM
Mode/Type: U
Status: suspended
Isolation Level: read committed (2)
Text Data:
DELETE
FROM
EVENTUSERDATA
WHERE
NOT EXISTS (
SELECT 1 FROM EVENTUSER WHERE EVENTUSER.USERID = EVENTUSERDATA.EVENTUSER
)
ECID: 4
SPID: 346
Transaction ID: 37056272524
Database ID: 13
Transaction Name: user_transaction
Lock Timeout: 4294967295
Input Buffer:
DELETE
FROM
EVENTUSERDATA
WHERE
NOT EXISTS (
SELECT 1 FROM EVENTUSER WHERE EVENTUSER.USERID = EVENTUSERDATA.EVENTUSER
)

Wait Resource: PAGE: 13:1:9026
Transaction Count: 0
ConnectedDeadlockObjects: Intercerve.SqlServer.Deadlocks.DeadlockProcess+<get_ConnectedDeadlockObjects>d__128

We got it again a few seconds later, on a different SQL statement:

[Deadlock Victim Information]:
SPID [ecid]: 421 [0]
Host: DEFCON-05
Application: .Net SqlClient Data Provider
Database: Olympia
Login: HIVE\olympiaservice
Log Used: 0
Deadlock Priority: 0
Wait Time: 1717
Transaction Start Time: 3/6/2020 6:56:26 AM
Last Batch Start Time: 3/6/2020 6:56:26 AM
Last Batch Completion Time: 3/6/2020 6:56:26 AM
Mode/Type: U
Status: suspended
Isolation Level: read committed (2)
Text Data:
(@OLD_EVENTUSER nvarchar(36),@OLD_EVENTID nvarchar(36))DELETE
FROM
[EVENTUSERDATA]
WHERE
([EVENTUSER] = @OLD_EVENTUSER)
AND ([EVENTID] = @OLD_EVENTID)
at unknown line 1
(@OLD_EVENTUSER nvarchar(36),@OLD_EVENTID nvarchar(36))DELETE
FROM
[EVENTUSERDATA]
WHERE
([EVENTUSER] = @OLD_EVENTUSER)
AND ([EVENTID] = @OLD_EVENTID)
ECID: 0
SPID: 421
Transaction ID: 37056517984
Database ID: 13
Transaction Name: user_transaction
Lock Timeout: 4294967295
Input Buffer:
(@OLD_EVENTUSER nvarchar(36),@OLD_EVENTID nvarchar(36))DELETE
FROM
[EVENTUSERDATA]
WHERE
([EVENTUSER] = @OLD_EVENTUSER)
AND ([EVENTID] = @OLD_EVENTID)

Wait Resource: KEY: 13:72057594041401344 (f099055cbd15)
Transaction Count: 0
ConnectedDeadlockObjects: Intercerve.SqlServer.Deadlocks.DeadlockProcess+<get_ConnectedDeadlockObjects>d__128

The first deadlock waits on a page-level lock, the second on a key-level lock. After that, things clear up. We get these from time to time; so far, all good, but it could be a red flag for something else brewing.
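The unparameterized cleanup DELETE in the first log removes every orphaned EVENTUSERDATA row in a single transaction, which is why it ends up waiting on a PAGE resource while the keyed DELETE holds row locks. A common mitigation (just a sketch of the general technique, not Olympia's actual code — the connection/cursor objects and batch size are illustrative stand-ins, e.g. for pyodbc) is to delete in small batches so each transaction holds fewer locks for less time:

```python
# Hedged sketch: batching an orphan-cleanup DELETE so each short
# transaction holds fewer locks, reducing the deadlock window against
# concurrent keyed DELETEs. BATCH_SIZE and the conn object are
# illustrative assumptions, not Olympia internals.

BATCH_SIZE = 500

CLEANUP_SQL = """
DELETE TOP (?) FROM EVENTUSERDATA
WHERE NOT EXISTS (
    SELECT 1 FROM EVENTUSER
    WHERE EVENTUSER.USERID = EVENTUSERDATA.EVENTUSER
)
"""

def delete_orphans_in_batches(conn, batch_size=BATCH_SIZE):
    """Delete orphaned EVENTUSERDATA rows one batch at a time.

    Each iteration commits its own transaction, so locks are released
    quickly instead of being held for the whole table sweep.
    """
    total = 0
    while True:
        cur = conn.cursor()
        cur.execute(CLEANUP_SQL, (batch_size,))
        deleted = cur.rowcount
        conn.commit()
        total += deleted
        if deleted < batch_size:
            return total
```

The trade-off is more round trips, but with only ~12k rows in the table each batch finishes almost instantly.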

The DB server is well provisioned, with a few dozen cores and hundreds of GB of RAM.

I’m not giving it too much thought right now, but I wanted to put it on your radar.

We are getting these deadlocks at least twice a day.

One process gets chosen as the victim and killed by MSSQL, then recovers.
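Since SQL Server resolves the deadlock by killing one victim (error 1205) and the process recovers on its own, client-side code often wraps such statements in a short retry loop. A minimal sketch of that pattern, assuming a `run_statement` callable that stands in for whatever executes the DELETE (this is the generic technique, not Olympia's actual implementation):

```python
# Hedged sketch: retrying a statement when SQL Server chooses it as the
# deadlock victim (error 1205). DeadlockVictim and run_statement are
# illustrative stand-ins for the real driver's exception and executor.

import time

DEADLOCK_ERROR = 1205  # SQL Server "chosen as deadlock victim"

class DeadlockVictim(Exception):
    """Raised by the stand-in executor when error 1205 occurs."""
    def __init__(self):
        super().__init__("Transaction was deadlocked (error 1205)")
        self.number = DEADLOCK_ERROR

def run_with_deadlock_retry(run_statement, attempts=3, backoff=0.1):
    """Run a statement, retrying with a growing delay on deadlock."""
    for attempt in range(attempts):
        try:
            return run_statement()
        except DeadlockVictim:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(backoff * (attempt + 1))
```

The growing delay gives the surviving transaction time to commit before the victim retries.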

Hello

How big is the EVENTUSERDATA table (number of rows / table size itself)?
You can answer via support@ or private message if you do not want to disclose this info here.

Hi,

Data space: 1.719 MB
VarDecimal storage format enabled: False
Index space: 0.094 MB
Row count: 12416

Thanks, logged as bugs://84054

bugs://84054 got closed with status fixed.