In vbo_draw_transform_feedback we currently look at
exec->array.inputs to determine if all varying
vertex attributes reside in VBOs. But the vbo_bind_arrays
call only happens after the vbo_all_varyings_in_vbos
query, so we may be working on a stale set of client arrays.
Using the current VAO's contents for this query feels much
more logical to me.
Additionally, with this change Mesa makes more use of the
information already tracked in the VAO instead of looping
over all VERT_ATTRIB_MAX vertex arrays.
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
Implement the equivalent of vbo_all_varyings_in_vbos for
vertex array objects.
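A minimal sketch of the idea, using the bitmasks the VAO already
tracks (the field names are from gl_vertex_array_object, but treat
the exact check as illustrative rather than the final patch, which
may also consider effectively unused arrays):

   /* All enabled vertex attributes reside in VBOs iff no enabled
    * attribute is missing from the set of VBO-backed attributes.
    */
   static bool
   all_varyings_in_vbos(const struct gl_vertex_array_object *vao)
   {
      return (vao->_Enabled & ~vao->VertexAttribBufferMask) == 0;
   }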
v2: Update comment.
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
When a vertex buffer object gets deleted, it is unbound
from the VAO. To do this, use _mesa_bind_vertex_buffer instead
of just unreferencing the buffer object. This keeps the VAO's
internal state consistent; in this case the problem showed up as
gl_vertex_array_object::VertexAttribBufferMask getting out of
sync.
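Roughly, the delete path goes from a plain unreference to a rebind
through the regular entry point, so that derived state like
VertexAttribBufferMask is updated too (a sketch; the binding field
name, call site and stride value are illustrative):

   /* before (sketch): only drops the reference, leaving derived
    * VAO state stale:
    *
    *    _mesa_reference_buffer_object(ctx, &binding->BufferObj, NULL);
    *
    * after (sketch): unbind via the regular path:
    */
   _mesa_bind_vertex_buffer(ctx, vao, index, ctx->Shared->NullBufferObj,
                            0 /* offset */, 16 /* stride */);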
Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
This was added with ARB_enhanced_layouts.
v2: Add an extra format specifier for the new qualifier.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
See commit b0629e6894, where we discovered
that the SOL stage's "Rendering Disable" feature is a lot faster at
throwing away all geometry than the clipper's "reject all" mode.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
CovID: 401540
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Previously, the shift was performed on a plain int (32 bits on
most systems) and would overflow before the result was cast to
64 bits.
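The general shape of the bug and the fix (a tiny self-contained
example; the names are hypothetical, not the ones from the patch):

   #include <stdint.h>

   uint64_t
   bit_mask(unsigned bit)
   {
      /* broken: the shift is evaluated in 32-bit int arithmetic,
       * so it overflows for bit >= 31 before the implicit widening
       * to 64 bits:
       *
       *    return 1 << bit;
       *
       * fixed: widen the operand first, so the shift happens in
       * 64-bit arithmetic:
       */
      return 1ull << bit;
   }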
CovID: 1362461
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
Signed-off-by: Rob Clark <robdclark@gmail.com>
We need to emit RB_FRAME_BUFFER_DIMENSION once per batch. Tracking
this in fd_context is wrong when the gmem code executes
asynchronously from the flush_queue worker. But in fact we don't
really need to track it at all: we cannot assume the previous value
at the beginning of a batch (because other processes could be using
the GPU), so just drop the tracking and emit it in _tile_init().
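A sketch of the unconditional emit, in a3xx terms (treat the exact
call site and surrounding code as illustrative):

   static void
   fd3_emit_tile_init(struct fd_batch *batch)
   {
      struct fd_ringbuffer *ring = batch->gmem;
      struct pipe_framebuffer_state *pfb = &batch->framebuffer;

      /* nothing can be assumed about previous GPU state, so just
       * emit the framebuffer dimensions at the start of each batch:
       */
      OUT_PKT0(ring, REG_A3XX_RB_FRAME_BUFFER_DIMENSION, 1);
      OUT_RING(ring, A3XX_RB_FRAME_BUFFER_DIMENSION_WIDTH(pfb->width) |
               A3XX_RB_FRAME_BUFFER_DIMENSION_HEIGHT(pfb->height));
      /* ... rest of tile init ... */
   }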
Signed-off-by: Rob Clark <robdclark@gmail.com>
This is also used in gmem code, which executes from the "bottom half"
(ie. from the flush_queue worker thread), so it cannot be in fd_context.
Signed-off-by: Rob Clark <robdclark@gmail.com>
They weren't really used, and they would become somewhat more
complicated to deal with if batches are flushed asynchronously (on
another thread). So just drop them, and move the
_query_set_state(NULL) call into batch (so it does not happen on a
background thread).
With the state accessed from GMEM+submit factored out of fd_context
and into fd_batch, it is now possible to punt this off to a helper
thread. More importantly, since there are cases where one context
might force the batch-cache to flush another context's batches (ie.
when there are too many in-flight batches), using a per-context
helper thread keeps the various flushes for a given context
serialized.
TODO: as with the batch-cache, there are a few places where we'll
need a mutex to protect critical sections; that is completely
missing at the moment.
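A minimal sketch of the per-context worker (not the actual
implementation; freedreno's queue and locking details differ, and
shutdown is omitted for brevity):

   #include <pthread.h>
   #include <stdlib.h>

   struct flush_job {
      struct flush_job *next;
      void (*flush)(void *batch);    /* gmem render + submit */
      void *batch;
   };

   struct flush_queue {
      pthread_mutex_t lock;
      pthread_cond_t cond;
      struct flush_job *head, *tail;
      pthread_t thread;
   };

   /* One of these threads per context: because all of a context's
    * flushes funnel through a single FIFO, they stay serialized no
    * matter which context triggered them.
    */
   static void *
   flush_worker(void *arg)
   {
      struct flush_queue *q = arg;
      pthread_mutex_lock(&q->lock);
      for (;;) {
         while (!q->head)
            pthread_cond_wait(&q->cond, &q->lock);
         struct flush_job *job = q->head;
         q->head = job->next;
         if (!q->head)
            q->tail = NULL;
         pthread_mutex_unlock(&q->lock);
         job->flush(job->batch);
         free(job);
         pthread_mutex_lock(&q->lock);
      }
      return NULL;
   }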
Signed-off-by: Rob Clark <robdclark@gmail.com>
Add a bit of extra book-keeping about blits and back-blits (from
resource shadowing). If the app uploads all mipmap levels, as opposed
to uploading the first level and then glGenerateMipmap(), we can discard
the back-blit (as opposed to being naive and shadowing the resource for
each mipmap level). Also, after a normal blit, we might as well flush
the batch immediately, since there is not likely to be further rendering
to the surface.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Push query state down to batch, and use the resource tracking to figure
out which batch(es) need to be flushed to get the query result.
This means we actually need to allocate the prsc up front, before we
know the size. So we have to add a special way to allocate an
unbacked resource, and then later allocate the backing storage.
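A sketch of the two-step allocation (the helper names here are
hypothetical):

   /* create the pipe_resource without backing storage, so it can
    * participate in batch dependency tracking right away:
    */
   struct pipe_resource *prsc =
      fd_resource_create_unbacked(pscreen, &templ);

   /* ... record queries against prsc, counting result slots ... */

   /* once the required size is known, attach the backing bo: */
   fd_resource_alloc_bo(fd_resource(prsc), nr_slots * slot_size);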
Signed-off-by: Rob Clark <robdclark@gmail.com>
Switch to using a pipe_resource (rather than an fd_bo directly) for hw
query result buffers. This is first step towards making queries work
properly with reordered batches, since we'll need the additional
dependency tracking to know which batches to flush.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Basically, to "DCE" blits triggered by resource shadowing, in cases
where the levels are immediately completely overwritten. For example,
mid-frame texture upload to level zero triggers shadowing and back-blits
to the remaining levels, which are immediately overwritten by
glGenerateMipmap().
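Conceptually (names hypothetical): track a pending-back-blit bit
per level, and clear it instead of blitting when the level gets
rewritten wholesale:

   /* the transfer completely overwrites this level: any back-blit
    * from shadowing that is still pending for it is dead code, so
    * drop it rather than execute it.
    */
   if (whole_level_written(rsc, level, box))
      rsc->pending_blit_mask &= ~(1u << level);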
Signed-off-by: Rob Clark <robdclark@gmail.com>
To make batch re-ordering useful, we need to be able to create
shadow resources to avoid a flush/stall in transfer_map(), for
example when uploading new texture contents or updating a UBO
mid-batch. In these cases, we want to clone the buffer and update
the new buffer, leaving the old buffer (whose reference is held by
the cmdstream) as a shadow.
This is done by blitting the remaining other levels (and whatever
part of the current level is not discarded) from the old/shadow
buffer to the new one.
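A conceptual sketch of the transfer_map path (all helper names are
hypothetical, not the actual freedreno functions):

   if (fd_resource_busy(rsc) && !fully_discarded(level, box)) {
      /* give rsc fresh storage; the old storage lives on as the
       * shadow, kept alive by the cmdstream's reference:
       */
      struct fd_resource *shadow = swap_in_new_storage(ctx, rsc);

      /* back-blit everything the caller is not about to rewrite: */
      for (unsigned l = 0; l <= rsc->base.b.last_level; l++)
         if (l != level)
            blit_level(ctx, rsc /* dst */, shadow /* src */, l);
      blit_nondiscarded(ctx, rsc, shadow, level, box);
   }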
Signed-off-by: Rob Clark <robdclark@gmail.com>
Note that I originally also had an entry-point that would construct
a key and do a lookup from a pipe_surface. I ended up not needing
that (yet?) but it is easy enough to re-introduce later if we need
it for the blit path.
For now, not enabled by default, but can be enabled (on a3xx/a4xx) with
FD_MESA_DEBUG=reorder.
Signed-off-by: Rob Clark <robdclark@gmail.com>
To flush batches out of order, the gmem code needs to not depend on
state from fd_context (since that may apply to a more recent batch).
So this all moves into batch.
The one exception is the gmem/pipe/tile state itself. But this is
only used from gmem code (and batches are flushed serially). The
alternative would be having to re-calculate GMEM layout on every
batch, even if the dimensions of the render targets are the same.
Note: This opens up the possibility of pushing gmem/submit into a
helper thread.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Introduce the batch object, to track a batch/submit's worth of
ringbuffers and other bookkeeping. In this first step, just move
the ringbuffers into batch, since that is mostly uninteresting
churn.
For now there is just a single batch at a time. Note that one
outcome of this change is that rb's are allocated/freed on each
use. But the expectation is that the bo pool in libdrm_freedreno
will save us the GEM bo alloc/free, which was the initial reason
to implement an rb pool in gallium.
The purpose of the batch is to eventually facilitate out-of-order
rendering, with batches associated to framebuffer state, and
tracking the dependencies on other batches.
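The rough shape of the new object (a sketch; the real struct grows
more bookkeeping as the series progresses):

   struct fd_batch {
      struct pipe_reference reference;
      struct fd_context *ctx;

      /* the cmdstreams that previously lived in fd_context: */
      struct fd_ringbuffer *draw;     /* draw pass */
      struct fd_ringbuffer *binning;  /* binning pass */
      struct fd_ringbuffer *gmem;     /* tile setup/resolve */
   };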
Signed-off-by: Rob Clark <robdclark@gmail.com>
This bug seems to have always been there. Applications changing shaders
but not textures between draw calls would have gotten undefined behavior.
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
This just needs to be done by st_validate_state.
v2: add "shaders_may_be_dirty" flags for not skipping st_validate_state
on _NEW_PROGRAM to detect real shader changes
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
The goal is to do this in st_validate_state:
   while (dirty)
      atoms[u_bit_scan(&dirty)]->update(st);
That implies that atoms can't specify which flags they consume.
There is exactly one ST_NEW_* flag for each atom. (58 flags in total)
There are macros that combine multiple flags into one for easier use.
All _NEW_* flags are translated into ST_NEW_* flags in st_invalidate_state.
st/mesa doesn't keep the _NEW_* flags after that.
torcs is 2% faster when comparing the previous patch with the end
of this series.
v2: - add st_atom_list.h to Makefile.sources
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
The pass I introduced in commit a2dc11a781
was entirely broken. A missing "break" made the load_interpolated_input
case always fall through to "default" and hit a "continue", making it
not actually move any load_interpolated_input intrinsics at all.
It would only move the simple load_barycentric_* intrinsics, which
don't emit any code anyway, making it basically useless.
The initial version I sent of the pass worked, but I apparently
failed to verify that the simplified version in v2 actually worked.
With the obvious fix applied (so we actually tried to move
load_interpolated_input intrinsics), I discovered a second bug: we
weren't moving the offset SSA def to the top, breaking SSA validation.
The new version of the pass actually moves load_interpolated_input
intrinsics and all their dependencies, as intended.
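The shape of the bug, simplified (the move_to_top() helper is a
stand-in for what the pass actually does):

   switch (intrin->intrinsic) {
   case nir_intrinsic_load_interpolated_input:
      move_to_top(intrin);
      break;   /* <-- this break was missing, so control fell
                * through to default and hit the continue */
   default:
      continue;   /* inside the pass's instruction loop */
   }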
Papers over GPU hangs on Ivybridge and Baytrail caused by the
recent NIR FS input rework by restoring the old behavior.
(I'm not honestly sure why they hang with PLN not at the top.)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97083
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Valgrind detected that the variable
ir_copy_propagation_visitor::killed_all was being used
uninitialized.
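The fix amounts to initializing the flag in the visitor's
constructor (a sketch; constructor parameters and body abridged):

   ir_copy_propagation_visitor::ir_copy_propagation_visitor(...)
   {
      /* ... existing member setup ... */
      this->killed_all = false;
   }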
Signed-off-by: Jan Ziak (http://atom-symbol.net) <0xe2.0x9a.0x9b@gmail.com>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Jan Ziak (http://atom-symbol.net) <0xe2.0x9a.0x9b@gmail.com>
Reviewed-by: Eric Engestrom <eric.engestrom@imgtec.com>
Reviewed-and-Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Exported dmabufs can get imported by the same process, but the handle was
not getting added to the hash table on export. Add the handle to the hash
table on export.
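A sketch of the fix in winsys terms (the struct and field names are
illustrative, not the exact ones from the patch):

   /* on export: hand out a prime fd *and* remember the gem handle,
    * so a later import of the same buffer by this process finds the
    * existing resource instead of wrapping it a second time:
    */
   if (drmPrimeHandleToFD(ws->fd, res->handle, DRM_CLOEXEC,
                          (int *)&whandle->handle))
      return FALSE;
   util_hash_table_set(ws->bo_handles,
                       (void *)(uintptr_t)res->handle, res);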
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>