SQL For Update Skip Locked Query and Java Multi Threading - How to fix this


2

SELECT id
FROM   table_name tkn1,
       (SELECT id,
               ROWNUM rnum
        FROM   table_name
        WHERE  PROCS_DT IS NULL
        ORDER BY PRTY DESC, CRET_DT) result
WHERE  tkn1.id = result.id
AND    result.rnum <= 10
FOR UPDATE OF tkn1.id SKIP LOCKED

Here is my problem: two threads execute this query at the same time.

Thread 1 - Executes the select and locks 10 rows ordered by descending priority and created date. Next, I update PROCS_DT to today's date with a separate query.

Thread 2 - Before thread 1 has updated PROCS_DT or committed, this thread executes the same query. My requirement is that the next 10 unlocked rows be handed over to thread 2. What really happens is that the same set of locked rows comes out of the inner query (PROCS_DT is still null, since thread 1 has not yet updated it), and because SKIP LOCKED applies only to the outer query, all 10 of those rows are skipped and no records are returned for thread 2 to process.

This ultimately defeats my multi-threading requirement.
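
For reference, the per-chunk flow on each thread looks roughly like the following plain-JDBC sketch (the class name, connection handling and batched update are assumptions for illustration; the real application uses Spring Batch, and LOCK_QUERY stands for the SELECT ... FOR UPDATE SKIP LOCKED statement above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ChunkWorker {

    // placeholder for the SELECT ... FOR UPDATE OF tkn1.id SKIP LOCKED query above
    private static final String LOCK_QUERY = "...";

    public void processChunk(Connection conn) throws SQLException {
        conn.setAutoCommit(false);

        // Step 1: select and lock up to 10 unprocessed rows.
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(LOCK_QUERY);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                ids.add(rs.getLong("id"));
            }
        }

        // Step 2: mark the locked rows as processed in a separate statement.
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE table_name SET PROCS_DT = SYSDATE WHERE id = ?")) {
            for (Long id : ids) {
                ps.setLong(1, id);
                ps.addBatch();
            }
            ps.executeBatch();
        }

        // Step 3: commit; only now are the row locks released and the new
        // PROCS_DT values visible to the other node.
        conn.commit();
    }
}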

How can I fix this query? I tried adding SKIP LOCKED to the inner query, but Oracle 11g doesn't allow it.

Experts, please help. I am using Oracle 11g.

2011-07-21 01:13
by Shiva
You have told us what you have done technically and what doesn't work. Can you please also specify what the functional requirements are? Maybe we can come up with a totally different approach - Rob van Wijk 2011-07-21 11:06
I have 2 nodes where 2 instances of the same batch application run. I have one database schema which needs to be shared by both instances. The 2 batch instances need to query a table at the same time and process those records half and half: if the table has 10k records, I expect 5k records to be processed by each node. I am using the Spring Batch framework and Oracle 11g and doing chunk-oriented processing, i.e. the nodes process 10 records at a time, then fetch the next chunk of 10 records, and so on until there are no more rows to process. I update the process date as each record is processed - Shiva 2011-07-21 12:04


5

I'd go with something like this: a cursor to select the rows in order FOR UPDATE, and the LIMIT clause on the bulk fetch to get the first ten available rows.

-- Test data: ~500 rows with an id, a name and a create_date
create table gm_temp
as select rownum id, table_name obj_name, date '2011-01-01' + rownum create_date
from all_tables where rownum < 500;

-- Collection type used to bulk-fetch the selected ids
CREATE TYPE tab_number IS TABLE OF NUMBER;

DECLARE
  -- Orders the candidate rows and skips any row already locked by another session
  CURSOR c_table IS
    SELECT id FROM gm_temp ORDER BY create_date DESC FOR UPDATE OF id SKIP LOCKED;
  t_table_src tab_number := tab_number();
BEGIN
  OPEN c_table;
  -- LIMIT 10 fetches (and locks) only the first ten available rows
  FETCH c_table BULK COLLECT INTO t_table_src LIMIT 10;
  CLOSE c_table;
  dbms_output.put_line(':'||t_table_src.count||':'||t_table_src(1));
END;

Actually, I'd first see whether processing ALL outstanding rows as a single set would be better than multi-threading.

Then if I did decide that I needed some form of multi-threading, I'd look at pipelined functions with parallel enabled (assuming I was on Enterprise Edition).

2011-07-21 02:23
by Gary Myers


1

Just remove result.rnum <= 10 from your WHERE condition. The SKIP LOCKED mechanism actually works in a different way: even if you remove ROWNUM from your query, it doesn't lock all rows, only the "fetched" rows. So instead of using rownum <= 10, just set fetchSize to 10; it will do what you need.
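
In JDBC terms that suggestion looks roughly like the sketch below (the class and method names and the connection handling are assumptions, not part of the original answer):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ChunkReader {

    // No ROWNUM filter in the query; fetchSize controls how many rows are fetched
    // (and therefore locked) per round trip.
    public List<Long> fetchNextChunk(Connection conn) throws SQLException {
        String sql = "SELECT id FROM table_name WHERE PROCS_DT IS NULL "
                   + "ORDER BY PRTY DESC, CRET_DT FOR UPDATE SKIP LOCKED";
        List<Long> chunk = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setFetchSize(10);                          // fetch rows ten at a time
            try (ResultSet rs = ps.executeQuery()) {
                while (chunk.size() < 10 && rs.next()) {  // stop after the first batch
                    chunk.add(rs.getLong("id"));
                }
            }
        }
        return chunk;   // caller processes these ids, updates PROCS_DT, then commits
    }
}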

What I am not sure about is the performance of the SKIP LOCKED mechanism with many records in hand. I have implemented another solution to this kind of problem: locking a virtual table with specific types so that only one thread/application is able to query the records that have the specific type.

Let me know about the SKIP LOCKED performance if possible, thanks.

2014-07-02 17:03
by user1934537


0

For your solution you can still order your records, provided you do not have to process them in exactly that order; depending on how big your table is, the result can be close to 98 percent ordered or something like that. You may try the solution below, which selects your records a little randomly.

Solution:

The query below is useful, but if your table is large you will definitely face a performance problem with it (I just tried it and saw this):

SELECT * FROM ( SELECT * FROM table ORDER BY dbms_random.value ) WHERE rownum = 1

So if you set a ROWNUM limit as below, you can work around the performance problem. By increasing that ROWNUM you reduce the chance of two transactions picking the same rows.

SELECT * FROM
  (SELECT id
   FROM   table_name tkn1,
          (SELECT id, ROWNUM rnum
           FROM   table_name
           WHERE  PROCS_DT IS NULL
           ORDER BY PRTY DESC, CRET_DT) result
   WHERE  tkn1.id = result.id
   AND    result.rnum <= 1000
   ORDER BY dbms_random.value)
WHERE rownum <= 10

After selecting the rows, try to update them; if you cannot update a row, that means another transaction has already used it, and you should get a new row and update its status instead. By the way, the probability of two different transactions getting the same row is 0.001, since the ROWNUM limit is 1000.
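
A rough sketch of that claim-and-retry step in JDBC (the class and method names and the PROCS_DT IS NULL check are assumptions based on the question's schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RowClaimer {

    // Returns false when another transaction has already claimed the row,
    // in which case the caller should pick a new random candidate and retry.
    public boolean tryClaim(Connection conn, long candidateId) throws SQLException {
        String claim = "UPDATE table_name SET PROCS_DT = SYSDATE "
                     + "WHERE id = ? AND PROCS_DT IS NULL";
        try (PreparedStatement ps = conn.prepareStatement(claim)) {
            ps.setLong(1, candidateId);
            return ps.executeUpdate() == 1;
        }
    }
}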

2013-07-04 08:43
by fatih tekin