This article introduces the implementation logic of the functions that PostgreSQL's ExecHashJoin depends on. The relevant data structures are listed first, and then a gdb session walks through ExecScanHashBucket; many readers run into trouble with this material in practice, so read carefully and follow along in a debugger of your own.
JoinState
The base class for Hash/NestLoop/Merge Join.
/* ----------------
 *   JoinState information
 *
 *   Superclass for state nodes of join plans
 *   (the base class for Hash/NestLoop/Merge Join).
 * ----------------
 */
typedef struct JoinState
{
    PlanState   ps;             /* base class: PlanState */
    JoinType    jointype;       /* join type */
    bool        single_match;   /* True if we should skip to next outer tuple
                                 * after finding one inner match */
    ExprState  *joinqual;       /* JOIN quals (in addition to ps.qual) */
} JoinState;
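A note on single_match: a semi join, or a join whose inner side is provably unique, needs at most one inner match per outer tuple. For reference, this is how the flag is derived (simplified from ExecInitHashJoin() in nodeHashjoin.c):

    /* Simplified from ExecInitHashJoin() in nodeHashjoin.c: one inner match
     * is enough for a semi join or a provably-unique inner side. */
    hjstate->js.single_match = (node->join.jointype == JOIN_SEMI ||
                                node->join.inner_unique);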
HashJoinState
The run-time state structure for Hash Join.
/* these structs are defined in executor/hashjoin.h: */
typedef struct HashJoinTupleData *HashJoinTuple;
typedef struct HashJoinTableData *HashJoinTable;

typedef struct HashJoinState
{
    JoinState   js;             /* base class; its first field is NodeTag */
    ExprState  *hashclauses;    /* the hash join clauses */
    List       *hj_OuterHashKeys;   /* outer hash keys; list of ExprState nodes */
    List       *hj_InnerHashKeys;   /* inner hash keys; list of ExprState nodes */
    List       *hj_HashOperators;   /* list of operator OIDs */
    HashJoinTable hj_HashTable;     /* the hash table */
    uint32      hj_CurHashValue;    /* current hash value */
    int         hj_CurBucketNo;     /* current bucket number */
    int         hj_CurSkewBucketNo; /* current skew bucket number */
    HashJoinTuple hj_CurTuple;      /* current tuple */
    TupleTableSlot *hj_OuterTupleSlot;      /* outer relation slot */
    TupleTableSlot *hj_HashTupleSlot;       /* hash tuple slot */
    TupleTableSlot *hj_NullOuterTupleSlot;  /* dummy outer slot for outer joins */
    TupleTableSlot *hj_NullInnerTupleSlot;  /* dummy inner slot for outer joins */
    TupleTableSlot *hj_FirstOuterTupleSlot;
    int         hj_JoinState;       /* state-machine phase (HJ_* constant) */
    bool        hj_MatchedOuter;    /* has the current outer tuple been matched? */
    bool        hj_OuterNotEmpty;   /* is the outer relation known non-empty? */
} HashJoinState;
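hj_JoinState records which phase of the hash-join state machine the node is currently in. For reference, the phases are defined as integer constants in nodeHashjoin.c; HJ_SCAN_BUCKET, the phase traced later in this article, is phase 3:

    /* States of the ExecHashJoin state machine (nodeHashjoin.c) */
    #define HJ_BUILD_HASHTABLE      1
    #define HJ_NEED_NEW_OUTER       2
    #define HJ_SCAN_BUCKET          3
    #define HJ_FILL_OUTER_TUPLE     4
    #define HJ_FILL_INNER_TUPLES    5
    #define HJ_NEED_NEW_BATCH       6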
HashJoinTable
The hash table data structure.
typedef struct HashJoinTableData
{
    int         nbuckets;       /* # buckets in the in-memory hash table */
    int         log2_nbuckets;  /* its log2 (nbuckets must be a power of 2) */

    int         nbuckets_original;  /* # buckets when starting the first hash */
    int         nbuckets_optimal;   /* optimal # buckets (per batch) */
    int         log2_nbuckets_optimal;  /* log2(nbuckets_optimal) */

    /* buckets[i] is head of list of tuples in i'th in-memory bucket */
    union
    {
        /* unshared array is per-batch storage, as are all the tuples */
        struct HashJoinTupleData **unshared;
        /* shared array is per-query DSA area, as are all the tuples */
        dsa_pointer_atomic *shared;
    }           buckets;

    bool        keepNulls;      /* true to store unmatchable NULL tuples */

    bool        skewEnabled;    /* are we using skew optimization? */
    HashSkewBucket **skewBucket;    /* hashtable of skew buckets */
    int         skewBucketLen;  /* size of skewBucket array (a power of 2!) */
    int         nSkewBuckets;   /* number of active skew buckets */
    int        *skewBucketNums; /* array indexes of active skew buckets */

    int         nbatch;         /* number of batches */
    int         curbatch;       /* current batch #; 0 during 1st pass */

    int         nbatch_original;    /* nbatch when we started inner scan */
    int         nbatch_outstart;    /* nbatch when we started outer scan */

    bool        growEnabled;    /* flag to shut off nbatch increases */

    double      totalTuples;    /* # tuples obtained from inner plan */
    double      partialTuples;  /* # tuples obtained from inner plan by me */
    double      skewTuples;     /* # tuples inserted into skew tuples */

    /*
     * These arrays are allocated for the life of the hash join, but only if
     * nbatch > 1.  A file is opened only when we first write a tuple into it
     * (otherwise its pointer remains NULL).  Note that the zero'th array
     * elements never get used, since we will process rather than dump out
     * any tuples of batch zero.
     */
    BufFile   **innerBatchFile; /* buffered virtual temp file per batch */
    BufFile   **outerBatchFile; /* buffered virtual temp file per batch */

    /*
     * Info about the datatype-specific hash functions for the datatypes
     * being hashed.  These are arrays of the same length as the number of
     * hash join clauses (hash keys).
     */
    FmgrInfo   *outer_hashfunctions;    /* lookup data for hash functions */
    FmgrInfo   *inner_hashfunctions;    /* lookup data for hash functions */
    bool       *hashStrict;     /* is each hash join operator strict? */

    Size        spaceUsed;      /* memory space currently used by tuples */
    Size        spaceAllowed;   /* upper limit for space used */
    Size        spacePeak;      /* peak space used */
    Size        spaceUsedSkew;  /* skew hash table's current space usage */
    Size        spaceAllowedSkew;   /* upper limit for skew hashtable */

    MemoryContext hashCxt;      /* context for whole-hash-join storage */
    MemoryContext batchCxt;     /* context for this-batch-only storage */

    /* used for dense allocation of tuples (into linked chunks) */
    HashMemoryChunk chunks;     /* one list for the whole batch */

    /* Shared and private state for Parallel Hash. */
    HashMemoryChunk current_chunk;  /* this backend's current chunk */
    dsa_area   *area;           /* DSA area to allocate memory from */
    ParallelHashJoinState *parallel_state;  /* parallel-execution state */
    ParallelHashJoinBatchAccessor *batches; /* parallel batch accessors */
    dsa_pointer current_chunk_shared;   /* start pointer of the current chunk */
} HashJoinTableData;

typedef struct HashJoinTableData *HashJoinTable;
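To see how these fields cooperate, here is a heavily simplified sketch of what ExecHashTableInsert() (nodeHash.c) does when the build side loads a tuple — skew handling, memory accounting, and table-growth logic are elided, and the wrapper name sketch_hash_table_insert is ours, not PostgreSQL's:

    /* Sketch only: how a tuple is chained into buckets.unshared[] or spilled
     * to a batch temp file.  Simplified from ExecHashTableInsert(). */
    static void
    sketch_hash_table_insert(HashJoinTable hashtable,
                             MinimalTuple tuple, uint32 hashvalue)
    {
        int         bucketno;
        int         batchno;

        ExecHashGetBucketAndBatch(hashtable, hashvalue, &bucketno, &batchno);

        if (batchno == hashtable->curbatch)
        {
            /* current batch: copy the tuple and push it onto the bucket chain */
            HashJoinTuple hashTuple = (HashJoinTuple)
                dense_alloc(hashtable, HJTUPLE_OVERHEAD + tuple->t_len);

            hashTuple->hashvalue = hashvalue;
            memcpy(HJTUPLE_MINTUPLE(hashTuple), tuple, tuple->t_len);
            hashTuple->next.unshared = hashtable->buckets.unshared[bucketno];
            hashtable->buckets.unshared[bucketno] = hashTuple;
        }
        else
        {
            /* later batch: dump it to that batch's inner temp file */
            ExecHashJoinSaveTuple(tuple, hashvalue,
                                  &hashtable->innerBatchFile[batchno]);
        }
    }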
HashJoinTupleData
Hash join tuple data.
/* ----------------------------------------------------------------
 *              hash-join hash table structures
 *
 * Each active hashjoin has a HashJoinTable control block, which is
 * palloc'd in the executor's per-query context.  All other storage needed
 * for the hashjoin is kept in private memory contexts, two for each
 * hashjoin.  This makes it easy and fast to release the storage when we
 * don't need it anymore.  (Exception: data associated with the temp files
 * lives in the per-query context too, since we always call buffile.c in
 * that context.)
 *
 * The hashtable contexts are made children of the per-query context,
 * ensuring that they will be discarded at end of statement even if the
 * join is aborted early by an error.  (Likewise, any temporary files we
 * make will be cleaned up by the virtual file manager in event of an
 * error.)
 *
 * Storage that should live through the entire join is allocated from the
 * "hashCxt", while storage that is only wanted for the current batch is
 * allocated in the "batchCxt".  By resetting the batchCxt at the end of
 * each batch, we free all the per-batch storage reliably and without
 * tedium.
 *
 * During first scan of inner relation, we get its tuples from executor.
 * If nbatch > 1 then tuples that don't belong in first batch get saved
 * into inner-batch temp files.  The same statements apply for the first
 * scan of the outer relation, except we write tuples to outer-batch temp
 * files.  After finishing the first scan, we do the following for each
 * remaining batch:
 *  1. Read tuples from inner batch file, load into hash buckets.
 *  2. Read tuples from outer batch file, match to hash buckets and output.
 *
 * It is possible to increase nbatch on the fly if the in-memory hash table
 * gets too big.  The hash-value-to-batch computation is arranged so that
 * this can only cause a tuple to go into a later batch than previously
 * thought, never into an earlier batch.  When we increase nbatch, we
 * rescan the hash table and dump out any tuples that are now of a later
 * batch to the correct inner batch file.  Subsequently, while reading
 * either inner or outer batch files, we might find tuples that no longer
 * belong to the current batch; if so, we just dump them out to the
 * correct batch file.
 * ----------------------------------------------------------------
 */

/* these are in nodes/execnodes.h: */
/* typedef struct HashJoinTupleData *HashJoinTuple; */
/* typedef struct HashJoinTableData *HashJoinTable; */

typedef struct HashJoinTupleData
{
    /* link to next tuple in same bucket */
    union
    {
        struct HashJoinTupleData *unshared;
        dsa_pointer shared;
    }           next;
    uint32      hashvalue;      /* tuple's hash code */
    /* Tuple data, in MinimalTuple format, follows on a MAXALIGN boundary */
} HashJoinTupleData;

#define HJTUPLE_OVERHEAD  MAXALIGN(sizeof(HashJoinTupleData))
#define HJTUPLE_MINTUPLE(hjtup)  \
    ((MinimalTuple) ((char *) (hjtup) + HJTUPLE_OVERHEAD))
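On a typical 64-bit build (assuming MAXIMUM_ALIGNOF = 8), sizeof(HashJoinTupleData) is 16 bytes — an 8-byte next link plus a 4-byte hash value and 4 bytes of padding — so HJTUPLE_OVERHEAD is 16 and HJTUPLE_MINTUPLE simply steps over the header. A sketch of the layout under that assumption:

    /*
     *   +----------------------------+  <- HashJoinTuple (MAXALIGN'ed)
     *   | next        (8 bytes)      |     link to next tuple in same bucket
     *   | hashvalue   (4 bytes)      |     tuple's hash code
     *   | padding     (4 bytes)      |     up to the next MAXALIGN boundary
     *   +----------------------------+  <- HJTUPLE_MINTUPLE(hjtup)
     *   | MinimalTuple (t_len bytes) |     the tuple data itself
     *   +----------------------------+
     */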
ExecScanHashBucket
Scans the hash bucket selected by the current outer relation tuple, looking for matching inner relation tuples.
/*----------------------------------------------------------------------
 * The HJ_SCAN_BUCKET phase
 *----------------------------------------------------------------------*/
/*
 * ExecScanHashBucket
 *      scan a hash bucket for matches to the current outer tuple
 *
 * The current outer tuple must be stored in econtext->ecxt_outertuple.
 *
 * On success, the inner tuple is stored into hjstate->hj_CurTuple and
 * econtext->ecxt_innertuple, using hjstate->hj_HashTupleSlot as the slot
 * for the latter.
 */
bool
ExecScanHashBucket(HashJoinState *hjstate,
                   ExprContext *econtext)
{
    ExprState  *hjclauses = hjstate->hashclauses;       /* hash join clauses */
    HashJoinTable hashtable = hjstate->hj_HashTable;    /* the hash table */
    HashJoinTuple hashTuple = hjstate->hj_CurTuple;     /* current tuple */
    uint32      hashvalue = hjstate->hj_CurHashValue;   /* current hash value */

    /*
     * hj_CurTuple is the address of the tuple last returned from the
     * current bucket, or NULL if it's time to start scanning a new bucket.
     *
     * If the tuple hashed to a skew bucket then scan the skew bucket
     * otherwise scan the standard hashtable bucket.
     */
    if (hashTuple != NULL)
        hashTuple = hashTuple->next.unshared;   /* advance via the chain pointer */
    else if (hjstate->hj_CurSkewBucketNo != INVALID_SKEW_BUCKET_NO)
        /* NULL, and skew optimization in use: fetch from the skew bucket */
        hashTuple = hashtable->skewBucket[hjstate->hj_CurSkewBucketNo]->tuples;
    else
        /* NULL, no skew optimization: fetch from the regular bucket */
        hashTuple = hashtable->buckets.unshared[hjstate->hj_CurBucketNo];

    while (hashTuple != NULL)   /* walk the bucket chain */
    {
        if (hashTuple->hashvalue == hashvalue)  /* hash values match */
        {
            TupleTableSlot *inntuple;

            /* insert hashtable's tuple into exec slot so ExecQual sees it */
            inntuple = ExecStoreMinimalTuple(HJTUPLE_MINTUPLE(hashTuple),
                                             hjstate->hj_HashTupleSlot,
                                             false);    /* do not pfree */
            econtext->ecxt_innertuple = inntuple;

            if (ExecQualAndReset(hjclauses, econtext))  /* join quals satisfied? */
            {
                hjstate->hj_CurTuple = hashTuple;   /* yes: record it, return true */
                return true;
            }
        }

        hashTuple = hashTuple->next.unshared;   /* next tuple in the bucket */
    }

    /*
     * no match: return false
     */
    return false;
}

/*
 * Store a minimal tuple into TTSOpsMinimalTuple type slot.
 *
 * If the target slot is not guaranteed to be TTSOpsMinimalTuple type slot,
 * use the, more expensive, ExecForceStoreMinimalTuple().
 */
TupleTableSlot *
ExecStoreMinimalTuple(MinimalTuple mtup,
                      TupleTableSlot *slot,
                      bool shouldFree)
{
    /*
     * sanity checks
     */
    Assert(mtup != NULL);
    Assert(slot != NULL);
    Assert(slot->tts_tupleDescriptor != NULL);

    if (unlikely(!TTS_IS_MINIMALTUPLE(slot)))   /* slot type check */
        elog(ERROR, "trying to store a minimal tuple into wrong type of slot");
    tts_minimal_store_tuple(slot, mtup, shouldFree);    /* do the store */

    return slot;
}

static void
tts_minimal_store_tuple(TupleTableSlot *slot, MinimalTuple mtup, bool shouldFree)
{
    MinimalTupleTableSlot *mslot = (MinimalTupleTableSlot *) slot;

    tts_minimal_clear(slot);    /* clear whatever the slot held before */

    /* sanity checks */
    Assert(!TTS_SHOULDFREE(slot));
    Assert(TTS_EMPTY(slot));

    /* set slot status */
    slot->tts_flags &= ~TTS_FLAG_EMPTY;
    slot->tts_nvalid = 0;
    mslot->off = 0;

    /* store the tuple into the minimal slot */
    mslot->mintuple = mtup;
    Assert(mslot->tuple == &mslot->minhdr);
    mslot->minhdr.t_len = mtup->t_len + MINIMAL_TUPLE_OFFSET;
    mslot->minhdr.t_data = (HeapTupleHeader) ((char *) mtup - MINIMAL_TUPLE_OFFSET);
    /* no need to set t_self or t_tableOid since we won't allow access */

    if (shouldFree)
        slot->tts_flags |= TTS_FLAG_SHOULDFREE;
    else
        Assert(!TTS_SHOULDFREE(slot));
}

/*
 * ExecQualAndReset() - evaluate qual with ExecQual() and reset expression
 * context.
 */
#ifndef FRONTEND
static inline bool
ExecQualAndReset(ExprState *state, ExprContext *econtext)
{
    bool        ret = ExecQual(state, econtext);    /* evaluate the qual */

    /* inline ResetExprContext, to avoid ordering issue in this file */
    MemoryContextReset(econtext->ecxt_per_tuple_memory);
    return ret;
}
#endif

#define HeapTupleHeaderSetMatch(tup) \
( \
    (tup)->t_infomask2 |= HEAP_TUPLE_HAS_MATCH \
)
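For context, here is an abridged look at how the HJ_SCAN_BUCKET case of the state machine in ExecHashJoinImpl() (nodeHashjoin.c) consumes ExecScanHashBucket's true/false result; the parallel-hash branch and anti-join handling are omitted here:

    case HJ_SCAN_BUCKET:
        /* Scan the selected hash bucket for matches to current outer */
        if (!ExecScanHashBucket(node, econtext))
        {
            /* out of matches; check for possible outer-join fill */
            node->hj_JoinState = HJ_FILL_OUTER_TUPLE;
            continue;
        }

        /* got a match, but still need to test the non-hashed join quals */
        if (joinqual == NULL || ExecQual(joinqual, econtext))
        {
            node->hj_MatchedOuter = true;
            HeapTupleHeaderSetMatch(HJTUPLE_MINTUPLE(node->hj_CurTuple));

            /* with single_match, one inner match per outer tuple is enough */
            if (node->js.single_match)
                node->hj_JoinState = HJ_NEED_NEW_OUTER;

            if (otherqual == NULL || ExecQual(otherqual, econtext))
                return ExecProject(node->js.ps.ps_ProjInfo);
            else
                InstrCountFiltered2(node, 1);
        }
        else
            InstrCountFiltered1(node, 1);
        break;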
The test script is as follows:
testdb=# set enable_nestloop=false;
SET
testdb=# set enable_mergejoin=false;
SET
testdb=# explain verbose select dw.*,grjf.grbh,grjf.xm,grjf.ny,grjf.je
testdb-# from t_dwxx dw,lateral (select gr.grbh,gr.xm,jf.ny,jf.je
testdb(#                         from t_grxx gr inner join t_jfxx jf
testdb(#                         on gr.dwbh = dw.dwbh
testdb(#                         and gr.grbh = jf.grbh) grjf
testdb-# order by dw.dwbh;
                                           QUERY PLAN
-----------------------------------------------------------------------------------------------
 Sort  (cost=14828.83..15078.46 rows=99850 width=47)
   Output: dw.dwmc, dw.dwbh, dw.dwdz, gr.grbh, gr.xm, jf.ny, jf.je
   Sort Key: dw.dwbh
   ->  Hash Join  (cost=3176.00..6537.55 rows=99850 width=47)
         Output: dw.dwmc, dw.dwbh, dw.dwdz, gr.grbh, gr.xm, jf.ny, jf.je
         Hash Cond: ((gr.grbh)::text = (jf.grbh)::text)
         ->  Hash Join  (cost=289.00..2277.61 rows=99850 width=32)
               Output: dw.dwmc, dw.dwbh, dw.dwdz, gr.grbh, gr.xm
               Inner Unique: true
               Hash Cond: ((gr.dwbh)::text = (dw.dwbh)::text)
               ->  Seq Scan on public.t_grxx gr  (cost=0.00..1726.00 rows=100000 width=16)
                     Output: gr.dwbh, gr.grbh, gr.xm, gr.xb, gr.nl
               ->  Hash  (cost=164.00..164.00 rows=10000 width=20)
                     Output: dw.dwmc, dw.dwbh, dw.dwdz
                     ->  Seq Scan on public.t_dwxx dw  (cost=0.00..164.00 rows=10000 width=20)
                           Output: dw.dwmc, dw.dwbh, dw.dwdz
         ->  Hash  (cost=1637.00..1637.00 rows=100000 width=20)
               Output: jf.ny, jf.je, jf.grbh
               ->  Seq Scan on public.t_jfxx jf  (cost=0.00..1637.00 rows=100000 width=20)
                     Output: jf.ny, jf.je, jf.grbh
(20 rows)
Start gdb and set a breakpoint:
(gdb) b ExecScanHashBucket
Breakpoint 1 at 0x6ff25b: file nodeHash.c, line 1910.
(gdb) c
Continuing.

Breakpoint 1, ExecScanHashBucket (hjstate=0x2bb8738, econtext=0x2bb8950) at nodeHash.c:1910
1910        ExprState  *hjclauses = hjstate->hashclauses;
Step through the assignment of the local variables:
1910        ExprState  *hjclauses = hjstate->hashclauses;
(gdb) n
1911        HashJoinTable hashtable = hjstate->hj_HashTable;
(gdb)
1912        HashJoinTuple hashTuple = hjstate->hj_CurTuple;
(gdb)
1913        uint32      hashvalue = hjstate->hj_CurHashValue;
(gdb)
1922        if (hashTuple != NULL)
The hash join clauses:
(gdb) p *hjclauses
$1 = {tag = {type = T_ExprState}, flags = 7 '\a', resnull = false, resvalue = 0, resultslot = 0x0,
  steps = 0x2bc4bc8, evalfunc = 0x6d1a6e, expr = 0x2bb60c0, evalfunc_private = 0x6cf625,
  steps_len = 7, steps_alloc = 16, parent = 0x2bb8738, ext_params = 0x0, innermost_caseval = 0x0,
  innermost_casenull = 0x0, innermost_domainval = 0x0, innermost_domainnull = 0x0}
The hash table:
(gdb) p hashtable
$2 = (HashJoinTable) 0x2bc9de8
(gdb) p *hashtable
$3 = {nbuckets = 16384, log2_nbuckets = 14, nbuckets_original = 16384, nbuckets_optimal = 16384,
  log2_nbuckets_optimal = 14, buckets = {unshared = 0x7f0fc1345050, shared = 0x7f0fc1345050},
  keepNulls = false, skewEnabled = false, skewBucket = 0x0, skewBucketLen = 0, nSkewBuckets = 0,
  skewBucketNums = 0x0, nbatch = 1, curbatch = 0, nbatch_original = 1, nbatch_outstart = 1,
  growEnabled = true, totalTuples = 10000, partialTuples = 10000, skewTuples = 0,
  innerBatchFile = 0x0, outerBatchFile = 0x0, outer_hashfunctions = 0x2bdc228,
  inner_hashfunctions = 0x2bdc280, hashStrict = 0x2bdc2d8, spaceUsed = 677754,
  spaceAllowed = 16777216, spacePeak = 677754, spaceUsedSkew = 0, spaceAllowedSkew = 335544,
  hashCxt = 0x2bdc110, batchCxt = 0x2bde120, chunks = 0x2c708f0, current_chunk = 0x0, area = 0x0,
  parallel_state = 0x0, batches = 0x0, current_chunk_shared = 0}
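A quick sanity check of this dump: nbuckets = 16384 = 2^14 agrees with log2_nbuckets = 14; nbatch = 1 means the whole inner relation fits in memory (spaceUsed = 677754 is well under spaceAllowed = 16777216), so innerBatchFile and outerBatchFile stay NULL; and spaceAllowedSkew = 335544 is exactly 2% of spaceAllowed (16MB), which matches SKEW_WORK_MEM_PERCENT in nodeHash.c.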
hash桶中的元組&hash值
(gdb) p *hashTuple
Cannot access memory at address 0x0
(gdb) p hashvalue
$4 = 2324234220
(gdb)
Fetch the hash tuple from the regular hash bucket:
(gdb) n
1924        else if (hjstate->hj_CurSkewBucketNo != INVALID_SKEW_BUCKET_NO)
(gdb) p hjstate->hj_CurSkewBucketNo
$5 = -1
(gdb) n
1927            hashTuple = hashtable->buckets.unshared[hjstate->hj_CurBucketNo];
(gdb)
1929        while (hashTuple != NULL)
(gdb) p *hashTuple
$6 = {next = {unshared = 0x0, shared = 0}, hashvalue = 1822113772}
(gdb) p hjstate->hj_CurBucketNo
$7 = 16364
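Where does bucket number 16364 come from? It is derived from the hash value by ExecHashGetBucketAndBatch() in nodeHash.c: the low log2_nbuckets bits select the bucket, and (when nbatch > 1) the next bits select the batch:

    void
    ExecHashGetBucketAndBatch(HashJoinTable hashtable,
                              uint32 hashvalue,
                              int *bucketno, int *batchno)
    {
        uint32      nbuckets = (uint32) hashtable->nbuckets;
        uint32      nbatch = (uint32) hashtable->nbatch;

        if (nbatch > 1)
        {
            /* low bits pick the bucket, the next log2(nbatch) bits the batch */
            *bucketno = hashvalue & (nbuckets - 1);
            *batchno = (hashvalue >> hashtable->log2_nbuckets) & (nbatch - 1);
        }
        else
        {
            *bucketno = hashvalue & (nbuckets - 1);
            *batchno = 0;
        }
    }

Here 2324234220 & (16384 - 1) = 16364, matching hj_CurBucketNo. Note that the tuple already in the bucket has hashvalue = 1822113772, which also lands in bucket 16364 (1822113772 & 16383 = 16364) yet differs from the outer tuple's hash value — a bucket collision, which is exactly why the scan loop compares the full 32-bit hash values, as seen next.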
Check whether the hash values match:
(gdb) n
1931            if (hashTuple->hashvalue == hashvalue)
(gdb) p hashTuple->hashvalue
$8 = 1822113772
(gdb) p hashvalue
$9 = 2324234220
(gdb)
They do not match, so move on to the next tuple:
(gdb) n
1948            hashTuple = hashTuple->next.unshared;
(gdb)
1929        while (hashTuple != NULL)
The next tuple is NULL, so the function returns false: this bucket holds no matching tuple.
(gdb) p *hashTuple
Cannot access memory at address 0x0
(gdb) n
1954        return false;
Set a breakpoint on ExecStoreMinimalTuple (this time around the hash values do match):
(gdb) b ExecStoreMinimalTuple
Breakpoint 2 at 0x6e8cbf: file execTuples.c, line 427.
(gdb) c
Continuing.

Breakpoint 1, ExecScanHashBucket (hjstate=0x2bb8738, econtext=0x2bb8950) at nodeHash.c:1910
1910        ExprState  *hjclauses = hjstate->hashclauses;
(gdb) del 1
(gdb) c
Continuing.

Breakpoint 2, ExecStoreMinimalTuple (mtup=0x2be81b0, slot=0x2bb9c18, shouldFree=false)
    at execTuples.c:427
427         Assert(mtup != NULL);
(gdb) finish
Run till exit from #0  ExecStoreMinimalTuple (mtup=0x2be81b0, slot=0x2bb9c18, shouldFree=false)
    at execTuples.c:427
0x00000000006ff335 in ExecScanHashBucket (hjstate=0x2bb8738, econtext=0x2bb8950) at nodeHash.c:1936
1936                inntuple = ExecStoreMinimalTuple(HJTUPLE_MINTUPLE(hashTuple),
Value returned is $10 = (TupleTableSlot *) 0x2bb9c18
(gdb) n
1939                econtext->ecxt_innertuple = inntuple;
The match succeeds and the function returns true:
(gdb) n
1941                if (ExecQualAndReset(hjclauses, econtext))
(gdb)
1943                    hjstate->hj_CurTuple = hashTuple;
(gdb)
1944                    return true;
(gdb)
1955    }
(gdb)
To sum up: in the HJ_SCAN_BUCKET phase, the logic is to scan the hash bucket looking for inner relation tuples that match the current outer relation tuple; when a match is found, the matching tuple is stored in hjstate->hj_CurTuple.
That concludes our look at the implementation logic of the functions that PostgreSQL's ExecHashJoin depends on. Thanks for reading.