Hi @karashiiro, I noticed that sale history queries via the API often take a few seconds to complete and sometimes time out, regardless of the `entriesToReturn` or `entriesWithin` parameters.
I’ve checked the code, and IIUC that API goes through the `RetrieveBySaleTimeCore` call, which issues the following query: `SELECT … FROM sale WHERE item_id=? AND world_id=? AND sale_time>=? ORDER BY sale_time DESC LIMIT ?`.
It might be possible to improve the resource usage and latency of this query by either indexing or partitioning the `sale` table:

1. If you add an index `ON sale (world_id, item_id, sale_time DESC)`, that index would fully satisfy the `WHERE`/`ORDER BY` part of the query above and should make those queries near-instantaneous, since the DB won’t have to scan any rows that fail the query conditions, or perform a sort for `ORDER BY`, before it can start streaming the query cursor. This solution is relatively cheap in terms of code changes (just a `Create.Index` in `Universalis.DbAccess/Migrations`), but it will increase your database storage usage due to the added index.
2. If you are concerned about the disk usage such an index would require, you can leverage table partitioning to achieve a somewhat similar improvement. I would suggest a composite partitioning scheme, e.g. `LIST (world_id)` → `HASH (item_id)` → `RANGE (sale_time)` with monthly or weekly ranges. This should yield a similar improvement for the `WHERE` part of the query (`ORDER BY`/`LIMIT` would still require a sort). However, this solution would be much more involved in terms of migration code, and it would require double the `sale` table’s size during the migration, because PostgreSQL does not allow partitioning an existing unpartitioned table in place, so the table would have to be re-created.
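To make both options concrete, here is a rough PostgreSQL DDL sketch. All object names (the index name, the `sale_p` parent, the world ID `74`, and the modulus) are illustrative, not taken from the actual Universalis schema:

```sql
-- Option 1: covering index. DESC on sale_time lets the planner walk
-- the index in the exact order the query requests, so no sort step.
-- CONCURRENTLY avoids locking the table during the build.
CREATE INDEX CONCURRENTLY ix_sale_world_item_time
    ON sale (world_id, item_id, sale_time DESC);

-- Option 2: composite partitioning. PostgreSQL cannot partition an
-- existing table in place, so a new parent must be created and data
-- copied over (LIKE copies the column definitions of sale).
CREATE TABLE sale_p (LIKE sale) PARTITION BY LIST (world_id);

-- Level 2: one LIST child per world, sub-partitioned by item hash.
CREATE TABLE sale_w74 PARTITION OF sale_p
    FOR VALUES IN (74)
    PARTITION BY HASH (item_id);

-- Level 3: hash buckets, sub-partitioned by time range.
CREATE TABLE sale_w74_h0 PARTITION OF sale_w74
    FOR VALUES WITH (MODULUS 4, REMAINDER 0)
    PARTITION BY RANGE (sale_time);

-- Level 4: monthly range leaves.
CREATE TABLE sale_w74_h0_2024_01 PARTITION OF sale_w74_h0
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
```

With the partitioned layout, the planner can prune down to a handful of leaf partitions for a given `(world_id, item_id, sale_time >= ?)` predicate, though it still sorts the surviving rows for `ORDER BY sale_time DESC`.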
Update: sorry, I missed that the Sales table is now stored in ScyllaDB. Would you be open to sharing your indexes and/or partitioning scheme for the Sales table in ScyllaDB?
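For context on what I’m asking about: in ScyllaDB, a table that serves this query pattern natively would typically put `(world_id, item_id)` in the partition key and cluster by `sale_time DESC`. A purely hypothetical CQL sketch (not your actual schema, and the payload columns are made up):

```sql
-- Hypothetical CQL schema, for illustration only.
CREATE TABLE sale (
    world_id   int,
    item_id    int,
    sale_time  timestamp,
    sale_id    uuid,        -- tiebreaker so same-timestamp sales don't collide
    unit_price bigint,      -- illustrative payload columns
    quantity   int,
    PRIMARY KEY ((world_id, item_id), sale_time, sale_id)
) WITH CLUSTERING ORDER BY (sale_time DESC, sale_id ASC);
```

With that layout, the `WHERE item_id=? AND world_id=? AND sale_time>=?` query hits a single partition and reads rows already in `sale_time DESC` order, so `LIMIT` can short-circuit without a sort. I’d be curious how close your actual schema is to this.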