Commit Graph

Kenneth Graunke 663b2e9a92 nir: Add a "compact array" flag and IO lowering code.
Certain built-in arrays, such as gl_ClipDistance[], gl_CullDistance[],
gl_TessLevelInner[], and gl_TessLevelOuter[] are specified as scalar
arrays.  Normal scalar arrays are sparse - each array element usually
occupies a whole vec4 slot.  However, most hardware assumes these
built-in arrays are tightly packed.

The new var->data.compact flag indicates that a scalar array should
be tightly packed, so a float[4] array would take up a single vec4
slot, and a float[8] array would take up two slots.

They are still arrays, not vec4s, however.  nir_lower_io will generate
intrinsics using ARB_enhanced_layouts style component qualifiers.

v2: Add nir_validate code to enforce type restrictions.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-11-22 00:29:23 -08:00
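
A minimal C sketch of the slot accounting described in the commit above; the helper name and the rounding math are illustrative, not the actual NIR code:

```c
#include <stdbool.h>

/* Illustrative only: vec4 slots consumed by a scalar float array.
 * A normal (sparse) scalar array uses one whole vec4 slot per element;
 * a compact array packs four floats per slot, so float[4] takes one
 * slot and float[8] takes two. */
static unsigned
scalar_array_vec4_slots(unsigned array_length, bool compact)
{
   if (compact)
      return (array_length + 3) / 4;
   return array_length;
}
```
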
Dave Airlie f395e3445d radv: add support for shader stats dump
I've started working on a shader-db-alike for Vulkan. It's based on vktrace
and records pipelines. This adds support for dumping shader stats exactly
like radeonsi does, so I can reuse the shader-db scripts it uses.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-11-22 07:20:17 +00:00
Dave Airlie 220912e214 radv: fix sample id loading
The sample id is packed into bits 8-12, so adjust
things properly.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-11-22 17:15:57 +10:00
Dave Airlie 3c6151ccaf radv/ac: add implementation of load_sample_pos intrinsic.
This fixes a bunch of crashes in CTS tests looking for this.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-11-22 17:15:54 +10:00
Dave Airlie 5697cfb7ec radv/ac: cleanup ddxy emission
This cleans up the ddxy emission along the same lines as
radeonsi. It also means we don't use LDS on VI chips; we use
the dspermute interface instead. It also removes some duplicated
code.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-11-22 17:15:43 +10:00
Dave Airlie fa57b77105 radv/meta: cleanup resolve vertex state emission
For the hw resolve there is no need to emit any sort
of texture coordinates, so drop them all in the meta path.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-11-22 17:15:37 +10:00
Bas Nieuwenhuizen 24427e31ef radv: Incorporate GPU family into cache UUID.
Invalidates the cache when someone switches cards.

Signed-off-by: Bas Nieuwenhuizen <basni@google.com>
2016-11-22 07:58:35 +01:00
Bas Nieuwenhuizen d94383970f radv: Use library mtime for cache UUID.
We want to also invalidate the cache when LLVM gets changed. As the
specific LLVM revision is not fixed at build time, we will need to
check at runtime. Computing a checksum for LLVM is going to be very
expensive, so just use the mtime.

Tested on my computer that the returned DSO for the LLVM symbol is
actually the LLVM DSO.

Signed-off-by: Bas Nieuwenhuizen <basni@google.com>
2016-11-22 07:58:35 +01:00
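
A rough sketch of the mtime lookup, assuming POSIX dladdr() and stat(); the choice of LLVMContextCreate as the probe symbol and the zero fallback are assumptions for illustration:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/stat.h>
#include <time.h>
#include <llvm-c/Core.h>

/* Find the DSO that provides an exported LLVM symbol and return that file's
 * modification time, so it can be mixed into the cache UUID. */
static time_t
llvm_dso_mtime(void)
{
   Dl_info info;
   struct stat st;

   if (!dladdr((void *)LLVMContextCreate, &info) || !info.dli_fname ||
       stat(info.dli_fname, &st) != 0)
      return 0;   /* unknown; a real driver might disable caching instead */

   return st.st_mtime;
}
```
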
Bas Nieuwenhuizen 43ee4917ca radv: Store UUID in physical device.
No sense in repeatedly determining it. Also, it might be dependent
on the device as shaders get compiled differently for SI/CIK/VI etc.

Signed-off-by: Bas Nieuwenhuizen <basni@google.com>
2016-11-22 07:58:35 +01:00
Timothy Arceri 581bd1d12a glsl: fix NULL check
Fixes copy and paste error in 9d96d3803a
2016-11-22 14:40:26 +11:00
Ilia Mirkin 807bc6ea9e swr: calculate viewport width/height based on the scale
The former calculations were for min/max y. The width/height don't take
translate into account.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Tim Rowley <timothy.o.rowley@intel.com>
2016-11-21 21:11:26 -05:00
Ilia Mirkin c3dd5b2e3f swr: don't claim to allow setting layer/viewport from VS
This may ultimately be possible to support, but for now it's not hooked
up and the swr core only supports this output from GS.

This normally wouldn't matter, but we lie about supporting GL 3.2, and
also the blitter and st/mesa will make use of this functionality if
claimed.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Tim Rowley <timothy.o.rowley@intel.com>
2016-11-21 21:11:26 -05:00
Ilia Mirkin d48740568f swr: allocate all scratch space in one go for vertex buffers
Multiple buffers may reference client arrays. When this happens, we
might reach for scratch space multiple times, which could cause later
arrays to invalidate the pointers allocated for the earlier ones.

This fixes copyteximage 2D_ARRAY.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
2016-11-21 21:11:26 -05:00
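
A hedged sketch of the allocate-once pattern the commit describes; the function and parameter names are hypothetical, not the swr scratch API:

```c
#include <stdlib.h>
#include <string.h>

/* Copy several client vertex arrays into a single scratch block so that a
 * later allocation can never invalidate a pointer handed out earlier.
 * out_ptrs[i] points at the copy of arrays[i]; the caller frees the
 * returned block once. */
static void *
copy_client_arrays(const void *const *arrays, const size_t *sizes,
                   unsigned count, void **out_ptrs)
{
   size_t total = 0;
   for (unsigned i = 0; i < count; i++)
      total += sizes[i];

   char *scratch = malloc(total);
   if (!scratch)
      return NULL;

   char *p = scratch;
   for (unsigned i = 0; i < count; i++) {
      memcpy(p, arrays[i], sizes[i]);
      out_ptrs[i] = p;
      p += sizes[i];
   }
   return scratch;
}
```
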
Ilia Mirkin 16d42f2f3d swr: call swr_update_derived unconditionally when drawing/clearing
Currently a sequence like draw/map/draw/map will cause the second map to
not wait for the second draw. This is because the first map will clear
the resource business bit, and the second draw won't reset it since no
state has changed.

swr_update_derived does a tiny bit of extra work, including updating the
SWR_BACKEND_STATE as well as waiting for pending fences. If that's a
problem, we could call swr_update_resource_status directly from
draw/clear handlers.

Fixes clearbuffer-stencil, clearbuffer-depth, clearbuffer-depth-stencil,
and clearbuffer-display-lists.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
2016-11-21 21:11:26 -05:00
Ilia Mirkin ee0b6597a9 swr: [rasterizer memory] minify texture width before alignment
The minification should happen before alignment, not after. See the similar
logic in ComputeLODOffsetY. The current logic requires unnecessarily
large textures when there's an initial NPOT size.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Tim Rowley <timothy.o.rowley@intel.com>
2016-11-21 21:11:26 -05:00
Ilia Mirkin c5a654786b swr: [rasterizer memory] minify original sizes for block formats
There's no guarantee that mip width/height will be a multiple of the
compressed block size. Doing a divide by the block size first yields
different results than GL expects, so we do the divide at the end.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Tim Rowley <timothy.o.rowley@intel.com>
2016-11-21 21:11:26 -05:00
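
A small sketch of the ordering the commit settles on; the helper name is made up for illustration:

```c
/* Width of a mip level in compressed blocks: minify the width in texels
 * first (matching GL's definition of per-level sizes), then divide by the
 * block width, rounding up, only at the end. */
static unsigned
mip_width_in_blocks(unsigned base_width, unsigned level, unsigned block_width)
{
   unsigned w = base_width >> level;
   if (w == 0)
      w = 1;
   return (w + block_width - 1) / block_width;
}
```

For example, with base_width = 10, block_width = 4 and level = 1 this yields ceil(5 / 4) = 2 blocks, whereas dividing by the block size before minifying would give only 1.
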
Marek Olšák bf75ef3f92 radeonsi: remove all varyings for depth-only rendering or rasterization off
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák ef6c84b301 radeonsi: eliminate VS outputs that aren't used by PS at runtime
A past commit added the ability to compile "optimized" shader variants
asynchronously (not stalling the app).

This commit builds upon that and adds what is basically a runtime shader
linker. If a VS output isn't used by the currently-bound PS, a new VS
compilation is started without that output. The new shader variant
is used when it's ready.

All apps using separate shader objects I've seen had unused VS outputs.

Eliminating unused/useless VS outputs also eliminates the corresponding
vertex attribute loads.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 7e76f9a7a8 radeonsi: record information about all written and read varyings
It's just tgsi_shader_info with DEFAULT_VAL varyings removed.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák c7f3e5c647 radeonsi: make si_shader_io_get_unique_index stricter
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák ed3190b3f3 radeonsi: don't export ClipVertex and ClipDistance[] if clipping is disabled
This is the first user of optimized monolithic shader variants.

Cull distances can't be disabled by states.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák d984a324bf radeonsi: add infrastr. for compiling optimized shader variants asynchronously
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák d2a56985d7 radeonsi: don't set vs.epilog.export_prim_id if TES is bound
There is no VS epilog in this case.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák fee71fec25 radeonsi: simplify checking for monolithic compilation
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák e6aee45db4 radeonsi: print all flags in si_dump_shader_key
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 6d5c2a8b5c radeonsi: split the shader key into 3 logical parts
key->part.*: prolog and epilog flags only
key->as_{ls,es}: special flags
key->mono.*: flags for monolithic compilation only

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
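
A hedged sketch of the three-way split named above; the field names inside each group are placeholders, not the real si_shader_key contents:

```c
/* Illustrative layout only: the three logical groups from the commit. */
struct example_shader_key {
   struct {
      unsigned prolog_flags;      /* key->part.*: prolog and epilog flags */
      unsigned epilog_flags;
   } part;

   unsigned as_ls : 1;            /* key->as_ls, key->as_es: special flags */
   unsigned as_es : 1;

   struct {
      unsigned flags;             /* key->mono.*: monolithic compilation only */
   } mono;
};
```
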
Marek Olšák d4e9f409e9 radeonsi: fix culling if clip & cull distances are used at the same time
Fixed piglits:
- arb_cull_distance/clip-cull-3
- arb_cull_distance/clip-cull-4

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 9d8db805ef radeonsi: clean up si_emit_clip_regs
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák e59389d738 radeonsi: assume that a VS without POSITION is LS
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 7dbf83af54 tgsi/scan: record if a shader writes the position output
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 8a2251911e tgsi/scan: use a big switch for scanning outputs
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák bdd860e307 radeonsi: decrease the number of texture slots to 24
Company Of Heroes 2 needs only 24.

This saves 512 bytes of CE RAM per shader stage.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák fa476e0566 radeonsi: fast exit si_emit_derived_tess_state early
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 79a8e674ae winsys/amdgpu: set addrlib flag opt4Space
Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 72d1669ed2 radeonsi: check for !is_linear in do_hardware_msaa_resolve
We don't want opt4Space here.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Marek Olšák 49fa4a4e60 gallium/radeon: add RADEON_SURF_OPTIMIZE_FOR_SPACE
FORCE_TILING should disable it. It has no effect now, but that may change
soon.

Tested-by: Edmondo Tommasina <edmondo.tommasina@gmail.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-11-21 21:44:35 +01:00
Mun Gwan-gyeong 44a3f2ee09 radeonsi: Add missing error-checking to si_create_compute_state (v2)
When uploading the shader fails, si_shader_binary_upload() returns -ENOMEM.
We should handle the si_shader_binary_upload() failure path in
si_create_compute_state().

CID 1394027

v2: Fixes from Edward O'Callaghan's review
 a) Explicitly check the return value with "si_shader_binary_upload() < 0"
 b) Update commit message.

Signed-off-by: Mun Gwan-gyeong <elongbug@gmail.com>
Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Signed-off-by: Marek Olšák <marek.olsak@amd.com>
2016-11-21 21:09:06 +01:00
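
A self-contained sketch of the v2 check; the types and the upload stub are hypothetical stand-ins for the radeonsi code:

```c
#include <errno.h>
#include <stdlib.h>

struct compute_state { int dummy; };

/* Stand-in for si_shader_binary_upload(): returns 0 on success or a
 * negative errno such as -ENOMEM on failure. */
static int upload_shader_binary(struct compute_state *cs)
{
   (void)cs;
   return -ENOMEM;
}

/* The pattern from the commit: treat "< 0" as failure and clean up instead
 * of returning a half-initialized compute state. */
static struct compute_state *
create_compute_state(void)
{
   struct compute_state *cs = calloc(1, sizeof(*cs));
   if (!cs)
      return NULL;

   if (upload_shader_binary(cs) < 0) {
      free(cs);
      return NULL;
   }
   return cs;
}
```
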
Roland Scheidegger e442db8e98 draw: drop some overflow computations
It turns out that no one actually cares if the address computations overflow,
be it the stride mul or the offset adds.
Wraparound seems to be explicitly permitted even by some other API (which
is a _very_ surprising result, as these overflow computations were added just
for that and made some tests pass at that time - I suspect some later fixes
fixed the actual root cause...). So the requirements in that other API were
actually sane there all along after all...
Still need to make sure the computed buffer size needed is valid, of course.
This ditches the shiny new widening mul from these codepaths, ah well...

And now that I really understand this, change the fishy min limiting
indices to what it really should have done. Which is simply to prevent
fetching more values than valid for the last loop iteration. (This makes
the code path in the loop minimally more complex for the non-indexed case
as we have to skip the optimization combining two adds. I think it should
be safe to skip this actually there, but I don't care much about this
especially since skipping that optimization actually makes the code easier
to read elsewhere.)

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-11-21 20:02:53 +01:00
Roland Scheidegger 2471aaa02f draw: simplify fetch some more
Don't keep the ofbit. This is just a minor simplification: adjust the buffer
size so that there will always be an overflow if buffers aren't valid to
fetch from.
Also, get rid of control flow from the instanced path. Not worried about
performance, but it's simpler and keeps the code more similar to ordinary
fetch.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-11-21 20:02:53 +01:00
Roland Scheidegger 4e1be31f01 draw: unify linear and elts draw jit functions
The code for elts and linear paths was nearly 100% identical by now - with
the elts path simply having some additional gather for the elements in the
main loop (with some additional small differences before the main loop).

Hence nuke the separate functions and decide this at jit shader execution
time (simply based on the presence of the elts pointer).

Some analysis shows that the generated vs jit functions seem to be just very
minimally more complex than the former elts functions, and almost none of the
additional complexity is in the main loop (basically just the branch logic
for the branch fetching the actual indices).
Compared to linear, the codesize of the function is of course a bit larger,
however the actual executed code in the main loop appears to be near 100%
identical (the additional code looking up indices is skipped as expected).

So, I would not expect a (meaningful) performance difference with the
generated code, neither with elts nor linear; this does however roughly
halve the compilation time (the compiled shaders should also use only half
the memory of course).

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-11-21 20:02:53 +01:00
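
A plain-C sketch of the idea (the real code is generated at JIT time; names here are illustrative):

```c
#include <stdint.h>

/* One fetch loop serving both paths: when an element buffer is present the
 * indices are gathered from it, otherwise the linear index is used directly. */
static void
fetch_vertices(const uint32_t *elts, unsigned start, unsigned count,
               void (*fetch_one)(uint32_t index, void *ctx), void *ctx)
{
   for (unsigned i = 0; i < count; i++) {
      uint32_t index = elts ? elts[start + i] : start + i;
      fetch_one(index, ctx);
   }
}
```
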
Roland Scheidegger 8cf7edff7d draw: use same argument order for jit draw linear / elts functions
This is a bit simpler. Mostly to make it easier to unify the paths later...

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-11-21 20:02:53 +01:00
Roland Scheidegger 78a997f728 draw: drop unnecessary index overflow handling from vsplit code
This was kind of strange, since it replaced indices which were only
overflowing due to bias with MAX_UINT. This would cause an overflow later
in the shader, except if stride was 0, however the vertex id would be
essentially random then (-1 + eltBias). No test cared about it, though.
So, drop this and just use ordinary int arithmetic wraparound as usual.
This is much simpler to understand and the results are "more correct" or
at least more consistent (vertex id as well as actual fetch results just
correspond to wrapped around arithmetic).
There's only one catch: it is now possible to hit the cache initialization
value also with ushort and ubyte elts path (this wouldn't be an issue if
we'd simply handle the eltBias itself later in the shader). Hence, we need
to make sure the cache logic doesn't think this element has already been
emitted when it has not (I believe some seriously bad things could happen
otherwise). So, borrow the logic which handled this from the uint case, but
not before fixing it up...

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-11-21 20:02:53 +01:00
Roland Scheidegger 7a55c436c6 draw: simplify vsplit elts code a bit
vsplit_get_base_idx explicitly returned idx 0 and set the ofbit
in case of overflow. We'd then check the ofbit and use idx 0 instead of
looking it up. This was necessary because DRAW_GET_IDX used to return
DRAW_MAX_FETCH_IDX and not 0 in case of overflows.
However, this is all unnecessary, we can just let DRAW_GET_IDX return 0
in case of overflow. In fact before bbd1e60198
the code already did that, not sure why this particular bit was changed
(might have been one half of an attempt to get these indices to actual draw
shader execution - in fact I think this would make things less awkward, it
would require moving the eltBias handling to the shader as well).
Note there are other callers of DRAW_GET_IDX - those code paths however
explicitly do not handle index buffer overflows, therefore the overflow
value doesn't matter for them.

Also do some trivial simplification - for (unsigned) a + b, checking res < a
is sufficient for overflow detection; we don't need to check for res < b too
(similar for signed).

And an index buffer overflow check looked bogus - eltMax is the number of
elements in the index buffer, not the maximum element which can be fetched.
(Drop the start check against the idx buffer though, this is already covered
by end check and end < start).

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-11-21 20:02:53 +01:00
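
The simplification about unsigned addition, as a small worked example (the function name is illustrative):

```c
#include <stdbool.h>

/* For unsigned a + b, wraparound happened iff the result is smaller than
 * one of the operands, so a single comparison suffices. */
static bool
add_overflows(unsigned a, unsigned b, unsigned *res)
{
   *res = a + b;
   return *res < a;   /* checking *res < b as well would be redundant */
}
```
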
George Kyriazis 9aae167e94 gallium: Add support for SWR compilation
Include the swr library and add -DHAVE_SWR to the compile line.

v3: split to a separate commit

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:47 -06:00
George Kyriazis 5b4d1500dd gallium: swr: Added swr build for windows
v4: Add windows-specific gen_knobs.{cpp|h} changes
v5: remove aggressive squashing of gen_knobs.py into this commit; added
SConscript to EXTRA_DIST in Makefile.am

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:47 -06:00
George Kyriazis 9e4e1f5190 swr: Modify gen_knobs.{cpp|h} creation script
Modify gen_knobs.py so that each invocation creates a single generated
file.  This is more similar to how the other generators behave.

v5: remove SConscript edits from this commit; moved to the commit that first
adds SConscript

Acked-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:47 -06:00
George Kyriazis 9085f1a9cc scons: Add swr compile option
To build the SWR driver (currently optional, not compiled by default).

v3: add option as opposed to target

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:47 -06:00
George Kyriazis bc26e8d4a7 swr: Windows-related changes
- Handle dynamic library loading for windows
- Implement swap for gdi
- fix prototypes
- update include paths on configure-based build for swr_loader.cpp

v2: split to multiple patches
v3: split and reshuffle some more; renamed title
v4: move Makefile.am changes to other commit. Modify header files

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:46 -06:00
George Kyriazis 87bd28210f swr: renamed duplicate swr_create_screen()
There are 2 swr_create_screen() functions.  One in swr_loader.cpp, which
is used during driver init, and the other is hiding in swr_screen.cpp,
which ends up in the arch-specific .dll/.so.

Rename the second one to swr_create_screen_internal(), to avoid confusion
in header files.

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:46 -06:00
George Kyriazis 974d280e81 swr: Handle windows.h and NOMINMAX
Reorder header files so that we have a chance to define NOMINMAX before the
mesa include files include windows.h.

v3: split from bigger patch

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
2016-11-21 12:44:46 -06:00
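
The usual pattern behind that reordering, as a sketch (not the actual swr headers):

```c
/* Make sure NOMINMAX is defined before anything can pull in windows.h, so
 * its min/max macros don't clash with other definitions. */
#ifndef NOMINMAX
#define NOMINMAX
#endif
#ifdef _WIN32
#include <windows.h>
#endif
```
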