i965: perf: minimize the chances to spread queries across batchbuffers
Counters related to timings are sensitive to any delay introduced by the software. In particular, if the begin and end of a performance query land in different batchbuffers, time-related counters will exhibit inflated values caused by the time it takes for the kernel driver to load a new request into the hardware.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
parent 7ee409dd4e
commit adafe4b733
@@ -1063,6 +1063,14 @@ brw_end_perf_query(struct gl_context *ctx,
                             obj->oa.begin_report_id + 1);
       }
 
+      /* We flush the batchbuffer here to minimize the chances that MI_RPC
+       * delimiting commands end up in different batchbuffers. If that's the
+       * case, the measurement will include the time it takes for the kernel
+       * scheduler to load a new request into the hardware. This is manifested
+       * in tools like frameretrace by spikes in the "GPU Core Clocks"
+       * counter.
+       */
+      intel_batchbuffer_flush(brw);
+
       --brw->perfquery.n_active_oa_queries;
 
       /* NB: even though the query has now ended, it can't be accumulated