Move DP emit to its own function. No functional change, just code
motion to prepare for splitting up const state into multiple state
objects on a6xx.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Implement ->set_damage_region() to support partial updates.
This is a dummy implementation in that it does not try to merge
damage rects. It also does not deal with distinct regions; instead it
picks the largest quad as the only damage rect and generates up to 4
reload rects out of it (the left/right/top/bottom regions surrounding
the biggest damage rect).
We also do not try to reduce the number of draws by passing all quad
vertices to the blit request (this would require extending u_blitter).
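For illustration, a minimal sketch of the reload-rect derivation
described above, with hypothetical names (the actual helper and types
in the patch may differ):

  struct drect { int x, y, w, h; };

  /* Given the biggest damage rect, compute up to 4 reload rects
   * covering the top/bottom/left/right regions around it. */
  static unsigned
  get_reload_rects(const struct drect *d, int fb_w, int fb_h,
                   struct drect out[4])
  {
     unsigned n = 0;

     if (d->y > 0)                /* region above */
        out[n++] = (struct drect){ 0, 0, fb_w, d->y };
     if (d->y + d->h < fb_h)      /* region below */
        out[n++] = (struct drect){ 0, d->y + d->h, fb_w,
                                   fb_h - (d->y + d->h) };
     if (d->x > 0)                /* region to the left */
        out[n++] = (struct drect){ 0, d->y, d->x, d->h };
     if (d->x + d->w < fb_w)      /* region to the right */
        out[n++] = (struct drect){ d->x + d->w, d->y,
                                   fb_w - (d->x + d->w), d->h };
     return n;
  }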
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Add a pipe_screen->set_damage_region() hook to propagate
set-damage-region requests to the driver. It is then up to the driver
to decide what to do with this piece of information.
If the hook is left unassigned, the buffer-damage extension is
considered unsupported.
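A sketch of what the hook might look like (the exact signature is
whatever the patch defines):

  /* In struct pipe_screen; drivers that leave this NULL do not
   * get the buffer-damage extension exposed. */
  void (*set_damage_region)(struct pipe_screen *screen,
                            struct pipe_resource *resource,
                            unsigned int nrects, const int *rects);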
Signed-off-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Use the DRI2 interface callback to pass the damage rects to
the driver.
Signed-off-by: Harish Krupo <harishkrupo@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Add a new DRI2_BufferDamage interface to support the
EGL_KHR_partial_update extension, informing the driver of an overriding
scissor region for a particular drawable.
Based on a commit originally authored by:
Harish Krupo <harish.krupo.kps@intel.com>
Changes from that version: renamed the extension, retargeted it at the
DRI drawable instead of the context, and rewrote the description.
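Roughly, the new interface is expected to look like this (a sketch;
the actual definition lives in the DRI interface headers):

  typedef struct __DRI2bufferDamageExtensionRec {
     __DRIextension base;

     /* Override the scissor region for subsequent rendering to the
      * drawable; rects are (x, y, width, height) quadruples. */
     void (*set_damage_region)(__DRIdrawable *drawable,
                               unsigned int nrects, int *rects);
  } __DRI2bufferDamageExtension;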
Signed-off-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
The intention of KHR_partial_update was not to send the damage back
to the platform but to send it to the driver, so that subsequent
rendering can be restricted to those regions.
This patch removes set_damage_region from the egl_dri vtbl and all
the platform_*.c files.
Upcoming patches then add a new DRI2 interface for the drivers to
implement.
Signed-off-by: Harish Krupo <harishkrupo@gmail.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Qiang Yu <yuq825@gmail.com>
Tested-by: Qiang Yu <yuq825@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This refactor will let us more easily use
pipe_screen::resource_get_param as an alternative to
pipe_screen::resource_get_handle.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Acked-by: Eric Anholt <eric@anholt.net>
This function retrieves individual parameters selected by enum
pipe_resource_param. It can be used as a more direct alternative to
pipe_screen::resource_get_handle.
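As a sketch, the query could look roughly like this (the parameter
list and enum values are whatever the actual patch defines):

  enum pipe_resource_param {
     PIPE_RESOURCE_PARAM_NPLANES,
     PIPE_RESOURCE_PARAM_STRIDE,
     PIPE_RESOURCE_PARAM_OFFSET,
     PIPE_RESOURCE_PARAM_MODIFIER,
  };

  /* In struct pipe_screen; returns true on success. */
  bool (*resource_get_param)(struct pipe_screen *screen,
                             struct pipe_resource *resource,
                             unsigned plane, unsigned layer,
                             enum pipe_resource_param param,
                             uint64_t *value);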
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Acked-by: Eric Anholt <eric@anholt.net>
The OpenGL ES spec requires that the value of gl_PointSize is clamped
to an implementation-dependent range matching what is advertised by
GL_ALIASED_POINT_SIZE_RANGE. For VC4 this is [1.0, 512.0], but the
hardware won't clamp to the minimum side of the range and won't render
points with a size strictly smaller than 1.0 either, so we need to
clamp manually. For points larger than the maximum size of the range
the hardware clamps automatically.
Fixes piglit test:
spec/!opengl 2.0/vs-point_size-zero
Reviewed-by: Eric Anholt <eric@anholt.net>
The OpenGL ES spec requires that the value of gl_PointSize is clamped
to an implementation-dependent range matching what is advertised by
GL_ALIASED_POINT_SIZE_RANGE. For V3D this is [1.0, 512.0], but the
hardware won't clamp to the minimum side of the range and won't render
points with a size strictly smaller than 1.0 either, so we need to
clamp manually. For points larger than the maximum size of the range
the hardware clamps automatically.
Fixes piglit test:
spec/!opengl 2.0/vs-point_size-zero
Reviewed-by: Eric Anholt <eric@anholt.net>
The OpenGL and OpenGL ES specs require that implementations clamp the
value of gl_PointSize to an implementation-depedent range. This pass
is useful for any GPU hardware that doesn't do this automatically
for either one or both sides of the range, such as V3D.
v2:
- Turn into a generic NIR pass (Eric).
- Make the pass work before I/O lowering so we can use the deref
variable to inspect whether we are writing to gl_PointSize (Eric).
- Make the pass take the range to clamp as a parameter and allow it
to clamp to both sides of the range or just one side.
- Make the pass report progress.
v3:
- Fix copyright header (Eric)
- use fmin/fmax instead of bcsel to clamp (Eric)
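The core of the clamp is tiny; a minimal sketch using the NIR builder
(assuming a store_deref whose value is src[1], with 'b' positioned at
the store, and a zero bound meaning "do not clamp that side"):

  nir_ssa_def *psiz = store->src[1].ssa;
  if (min_size > 0.0f)
     psiz = nir_fmax(b, psiz, nir_imm_float(b, min_size));
  if (max_size > 0.0f)
     psiz = nir_fmin(b, psiz, nir_imm_float(b, max_size));
  /* ...then rewrite the store to use the clamped value... */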
Reviewed-by: Eric Anholt <eric@anholt.net>
v2:
- Fix incremental update of the const offset when we need to emit a sequence
with more than one write because of the writemask.
- Do not move the tmu write emission to a separate helper.
v3:
- Get the store writemask before the loop, use ffs to get the first component
to write and clear writemask bits as we process the components (Eric).
- Simplified the code that figures out the number of components for the
TMU config based on the number of tmu writes for stores and atomics.
v4:
- Code clean-ups (Eric).
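For reference, the ffs-based loop from v3 looks roughly like this
(a sketch, not the exact code; the writemask is NIR's 4-bit store
writemask):

  unsigned writemask = nir_intrinsic_write_mask(instr);
  while (writemask) {
     unsigned first = ffs(writemask) - 1;  /* first component to write */
     unsigned count = 1;
     while (writemask & (1u << (first + count)))
        count++;                           /* consecutive components */

     /* ...emit one TMU write sequence covering components
      *    [first, first + count)... */

     writemask &= ~(((1u << count) - 1) << first);
  }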
Fixes:
KHR-GLES31.core.shader_image_load_store.advanced-cast-cs
KHR-GLES31.core.shader_image_load_store.advanced-cast-fs
KHR-GLES31.core.shader_storage_buffer_object.advanced-switchBuffers-cs
KHR-GLES31.core.shader_storage_buffer_object.advanced-switchPrograms-cs
KHR-GLES31.core.shader_storage_buffer_object.basic-operations-case1-cs
Reviewed-by: Eric Anholt <eric@anholt.net>
When we implement write masks on store operations we might need to
emit multiple write sequences for a given store intrinsic. To make
that easier, let's split the emission of the tmud instructions into
its own block, after we are done with the code that only needs to
run once no matter how many write sequences we need to emit.
Reviewed-by: Eric Anholt <eric@anholt.net>
If the current job has a sequence of draw calls involving SSBOs and/or
shader images, we would flush the job between consecutive draw calls.
With this change, we won't flush the current job and we rely on the
application inserting correct barriers by issuing glMemoryBarrier()
when needed.
v2 (Eric):
- When mapping a buffer for writing, we always need to flush.
Reviewed-by: Eric Anholt <eric@anholt.net>
PIPE_BARRIER_UPDATE is defined as:
PIPE_BARRIER_UPDATE_BUFFER | PIPE_BARRIER_UPDATE_TEXTURE
which means we were flushing for any flags other than these two, even
though the intent was to flush only for SSBOs and images.
Actually, the driver automatically flushes jobs as needed, including
for writes/reads involving SSBOs and images, so we don't really need to
flush anything when the program emits a barrier. However, this may
lead to excessive flushing in some cases, so we will soon change this
to avoid automatic flushing of the current job for SSBOs and images,
meaning that we will rely on the application to emit correct memory
barriers for these, which we should make sure to process here.
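After that change, the barrier hook only needs to flush for the flags
that actually cover SSBOs and images; a sketch, with the flush call
simplified:

  static void
  v3d_memory_barrier(struct pipe_context *pctx, unsigned int flags)
  {
     if (!(flags & (PIPE_BARRIER_SHADER_BUFFER | PIPE_BARRIER_IMAGE)))
        return;

     /* Flush jobs touching SSBOs/images; sketch only. */
     v3d_flush(pctx);
  }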
Reviewed-by: Eric Anholt <eric@anholt.net>
If the current draw call includes SSBOs, then we must flush any jobs
that are writing to the same SSBOs (so that our SSBO reads are correct),
as well as jobs reading from the same SSBOs (so that our SSBO writes
don't stomp previous SSBO reads).
The exact same logic applies to shader images. In this case we were already
flushing previous writes, but we should also flush previous reads.
Note that we don't need to call v3d_flush_jobs_reading_resource() and
v3d_flush_jobs_writing_resource() separately, though, since flushing
jobs that read a resource also flushes those writing to it.
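So a single call per resource suffices; a sketch of the per-draw loop
(assuming the driver's existing foreach_bit helper and SSBO
bookkeeping):

  foreach_bit(i, v3d->ssbo[s].enabled_mask) {
     struct pipe_resource *res = v3d->ssbo[s].sb[i].buffer;
     /* Also flushes jobs writing the resource. */
     v3d_flush_jobs_reading_resource(v3d, res);
  }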
Suggested-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
The `device` can be set earlier, either by the command line or by
intercepting an ioctl call to get the I915_PARAM_CHIPSET_ID done by
the application early. In both cases `aub_file` and `devinfo` would
not be initialized.
Fix by splitting the conditions:
- `device == 0`: use the FD to get both device and devinfo.
- Or `devinfo.gen == 0`: use `device` to initialize it.
And separately, initialize aub_file the first time it is needed.
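In pseudocode, the split looks like this (helper names are
illustrative):

  if (device == 0) {
     /* Nothing forced yet: use the fd for both. */
     device = get_device_id(fd);          /* hypothetical helper */
     gen_get_device_info(device, &devinfo);
  } else if (devinfo.gen == 0) {
     /* device was forced early; only devinfo is missing. */
     gen_get_device_info(device, &devinfo);
  }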
Fixes: d594d2a052 ("intel/tools: use device info initializer")
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
If the pixel pipes have a different number of subslices, emit a slice
hashing table that will ensure proper workload distribution.
v2: Set Mask field to 0xffff for workaround (Ken).
If the pixel pipes have a different number of subslices, emit a slice
hashing table that will ensure proper workload distribution.
v2: Don't need to set the mask - it's mbo (Ken).
If the pixel pipes have a different number of subslices, emit a slice
hashing table that will ensure proper workload distribution.
v2: Don't need to set the mask - it's mbo (Ken).
v3: Don't keep a reference to the resource used for emitting the table
(Ken).
Add these fields and the 3DSTATE_SLICE_TABLE_STATE_POINTERS instruction
so we can properly configure the slice and subslice hashing on ICL+.
v2: Make the 'Mask' field mbo (Ken).
We don't need it for state setup but it's a useful statistic we want to
pass on to developers.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This commit is all annoying plumbing work which just adds support for a
new brw_compile_stats struct. This struct provides a binary,
driver-readable form of the same statistics we dump out to stderr when
INTEL_DEBUG is set with a shader stage.
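A plausible shape for the struct (a sketch; the actual fields are
whatever the patch adds to brw_compiler.h):

  struct brw_compile_stats {
     uint32_t dispatch_width;  /* SIMD width; 0 when not applicable */
     uint32_t instructions;
     uint32_t loops;
     uint32_t cycles;
     uint32_t spills;
     uint32_t fills;
  };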
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
While NIR's lower_imul64() handles the case of 64-bit integer
multiplications generated early, we don't have a way to lower such
instructions when they are generated by our own backend, such as by the
scan/reduce intrinsics. We'll need this soon, so implement it now.
An easy way to test this is to simply disable nir_lower_imul64 to let
those operations reach the backend.
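The lowering itself follows the usual schoolbook decomposition,
keeping only the low 64 bits of the product; in plain C terms:

  #include <stdint.h>

  static uint64_t
  mul64(uint64_t a, uint64_t b)
  {
     uint32_t a_lo = a, a_hi = a >> 32;
     uint32_t b_lo = b, b_hi = b >> 32;

     /* The one 64b = 32b * 32b multiply mentioned below. */
     uint64_t lo = (uint64_t)a_lo * b_lo;
     /* Cross terms only contribute to the high dword (mod 2^64). */
     uint32_t cross = a_lo * b_hi + a_hi * b_lo;

     return lo + ((uint64_t)cross << 32);
  }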
v2:
- Fix Q/UQ copy/paste errors (Caio).
- Transform an 'if' into 'else if' (Caio).
- Add an extra comment to clarify the need for 64b = 32b * 32b
(Caio).
- Make private functions private (Caio).
v3:
- Remove ambiguity with 'b' and 'd' variables (Caio).
- Allocate potentially less regs for the dwords (Caio).
Cc: Jason Ekstrand <jason.ekstrand@intel.com>
Cc: Matt Turner <matt.turner@intel.com>
Cc: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Invert the logic of how progress is handled: remove the continue
statements and mark progress inside the places where it actually
happens.
We're going to add a new lowering that also looks for BRW_OPCODE_MUL,
so inverting the logic here makes the resulting code much easier to
follow.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Don't instantiate a builder for each instruction during
lower_integer_multiplication(). Instantiate one only when needed.
On the other hand, these unneeded builders don't seem to cost much to
init, so I don't expect any significant difference in performance:
this is mostly about code organization.
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>