Commit Graph

106876 Commits

Dylan Baker 9e989b860a bin/meson-cmd-extract: Also handle cross and native files
Native file support in command line serialization isn't present in meson
0.49, but will be for 0.49.1 and 0.50

Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
2019-01-18 09:37:01 -08:00
Jason Ekstrand b54df1b6df anv: Re-sort the extensions list
I like to keep things in good order so that you can find them.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-18 10:32:23 -06:00
Jason Ekstrand eb32dad07c intel/fs: Don't touch accumulator destination while applying regioning alignment rule
In some shaders, you can end up with a stride in the source of a
SHADER_OPCODE_MULH.  One way this can happen is if the MULH is acting on
the top bits of a 64-bit value due to 64-bit integer lowering.  In this
case, the compiler will produce something like this:

mul(8)    acc0<1>UD   g5<8,4,2>UD   0x0004UW      { align1 1Q };
mach(8)   g6<1>UD     g5<8,4,2>UD   0x00000004UD  { align1 1Q AccWrEnable };

The new region fixup pass looks at the MUL and sees a strided source and
unstrided destination and determines that the sequence is illegal.  It
then attempts to fix the illegal stride by replacing the destination of
the MUL with a temporary and emitting a MOV into the accumulator:

mul(8)    g9<2>UD     g5<8,4,2>UD   0x0004UW      { align1 1Q };
mov(8)    acc0<1>UD   g9<8,4,2>UD                 { align1 1Q };
mach(8)   g6<1>UD     g5<8,4,2>UD   0x00000004UD  { align1 1Q AccWrEnable };

Unfortunately, this new sequence isn't correct because MOV accesses the
accumulator with a different precision than MUL and, instead of filling
the bottom 32 bits with the source and zeroing the top 32 bits, it
leaves the top 32 (or maybe 31) bits alone and full of garbage.  When
the MACH comes along and tries to complete the multiplication, the
result is correct in the bottom 32 bits (which we throw away) and
garbage in the top 32 bits which are actually returned by MACH.

This commit does two things:  First, it adds an assert to ensure that we
don't try to rewrite accumulator destinations of MUL instructions so we
can avoid this precision issue.  Second, it modifies
required_dst_byte_stride to require a tightly packed stride so that we
fix up the sources instead, and the actual code that gets emitted is
this:

mov(8)    g9<1>UD     g5<8,4,2>UD                 { align1 1Q };
mul(8)    acc0<1>UD   g9<8,8,1>UD   0x0004UW      { align1 1Q };
mach(8)   g6<1>UD     g5<8,4,2>UD   0x00000004UD  { align1 1Q AccWrEnable };

Fixes: efa4e4bc5f "intel/fs: Introduce regioning lowering pass"
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
2019-01-18 10:18:52 -06:00
Jason Ekstrand 0a7ac6d543 intel/eu: Stop overriding exec sizes in send_indirect_message
For a long time, we based exec sizes on destination register widths.
We've not been doing that since 1ca3a94427 but a few remnants
accidentally remained.

Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
2019-01-18 10:18:52 -06:00
Samuel Pitoiset f682ed11c3 radv: initialize the per-queue descriptor BO only once
Totally useless to write the descriptors inside the loop.

Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-18 13:26:32 +01:00
Samuel Pitoiset 72d9745a40 radv: do not write unused descriptors to the per-queue BO
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-18 13:26:30 +01:00
Samuel Pitoiset 8c164ea8f5 radv: reduce size of the per-queue descriptor BO
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-18 13:26:28 +01:00
Samuel Pitoiset 83cc87ead4 radv: drop unused code related to 16 sample locations
The driver only supports up to 8 sample locations.

Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-18 13:26:24 +01:00
Karol Herbst 80dae7022e gm107/ir: disable TEXS for tex with derivAll set
fixes deqp tests:
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.samplercube_fixed_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.samplercube_float_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.isamplercube_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.usamplercube_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.sampler3d_fixed_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.sampler3d_float_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.isampler3d_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.usampler3d_vertex
dEQP-GLES3.functional.shaders.texture_functions.texturegrad.sampler2dshadow_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgrad.sampler3d_fixed_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgrad.sampler3d_float_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgrad.isampler3d_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgrad.usampler3d_vertex
dEQP-GLES3.functional.shaders.texture_functions.textureprojgrad.sampler2dshadow_vertex

Fixes: f821e80213
       "gm107/ir: use scalar tex instructions where possible"
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2019-01-18 03:27:51 +01:00
Karol Herbst 30b5c9eda2 nv50/ir: disable tryCollapseChainedMULs in ConstantFolding for precise instructions
fixes dEQP-GLES2.functional.shaders.invariance.mediump.loop_3

CC: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2019-01-18 02:03:30 +01:00
Bas Nieuwenhuizen 8424cd8fbd nir: Account for atomics in copy propagation.
Otherwise writes get propagated across atomics if no barrier is
used. Without a barrier, writes should still be visible in the same
invocation, so an atomic has to be considered a write.
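
A plain-C analogy of the hazard, for illustration only (this is not the
NIR pass itself):

#include <assert.h>
#include <stdatomic.h>

void example(atomic_int *p)
{
   atomic_store(p, 1);
   int copy = 1;                    /* a propagated copy of the store above */
   atomic_fetch_add(p, 1);          /* the atomic writes *p... */
   assert(atomic_load(p) != copy);  /* ...so the old copy must not survive it */
}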

CC: <mesa-stable@lists.freedesktop.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Fixes: b3c6146925 "nir: Copy propagation between blocks"
Fixes: 62332d139c "nir: Add a local variable-based copy propagation pass"
2019-01-18 00:55:35 +01:00
Rafael Antognolli 927ba12b53 anv/tests: Adding test for the state_pool padding.
Add a test that checks that we can use the extra space allocated for
padding while allocating larger anv_states.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:26 -08:00
Rafael Antognolli 731c4adcf9 anv/allocator: Add support for non-userptr.
If softpin is supported, create new BOs for the required size and add the
respective BO maps. The other main change of this commit is that
anv_block_pool_map() now returns the map for the BO that the given
offset is part of. So there's no block_pool->map access anymore (when
softpin is used).
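
For reference, a minimal sketch of allocating a plain GEM BO (rather
than wrapping CPU memory with userptr), using the i915 uAPI; error
handling is omitted:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

uint32_t create_pool_bo(int fd, uint64_t size)
{
   struct drm_i915_gem_create gem_create = { .size = size };
   if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &gem_create) < 0)
      return 0;                /* 0 is never a valid GEM handle */
   return gem_create.handle;   /* one handle per pool grow step */
}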

v3:
 - set fd to -1 on softpin case (Jason)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:24 -08:00
Rafael Antognolli 643248b66a anv: Remove state flush.
We have all the state buffers snooped, so we don't need to clflush
everything anymore.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:22 -08:00
Rafael Antognolli 5d61c74f3d anv/allocator: Enable snooping on block pool and anv_bo_pool BOs.
We are not going to use userptr for anv block pool BOs anymore. However,
so far we have been relying on the fact that userptr BOs are snooped on
non-llc platforms. Let's make sure that the block pool BOs are still
snooped, and we can also remove the clflush'ing that we do on all state
buffers.

And since we plan to remove the flushes, set the anv_bo_pool BOs to
cached (snooped on non-LLC platforms) too. For LLC platforms, they are
all cached by default, so this becomes a no-op.
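
At the uAPI level this boils down to the i915 set-caching ioctl,
roughly as below (the driver wraps this in its own gem helpers):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

int set_bo_snooped(int fd, uint32_t handle)
{
   struct drm_i915_gem_caching caching = {
      .handle = handle,
      .caching = I915_CACHING_CACHED,   /* snooped on non-LLC platforms */
   };
   return ioctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &caching);
}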

v5:
 - Add snooping to anv_bo_pool BOs too (Jason).
 - Remove anv_gem_set_domain.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:20 -08:00
Rafael Antognolli dfc9ab2ccd anv/allocator: Add padding information.
It's possible that we still have some space left in the block pool, but
we try to allocate a state larger than that remaining space. This means
such state
would start somewhere within the range of the old block_pool, and end
after that range, within the range of the new size.

That's fine when we use userptr, since the memory in the block pool is
CPU mapped contiguously. However, by the end of this series, we will
have the block_pool split into different BOs, with different CPU
mapping ranges that are not necessarily contiguous. So we must avoid
such case of a given state being part of two different BOs in the block
pool.

This commit solves the issue by detecting that we are growing the
block_pool even though we are not at the end of the range. If that
happens, we don't use the space left at the end of the old size, and
consider it as "padding" that can't be used in the allocation. We update
the size requested from the block pool to take the padding into account,
and return the offset after the padding, which happens to be at the
start of the new address range.

Additionally, we return the amount of padding we used, so the caller
knows that this happens and can return that padding back into a list of
free states, that can be reused later. This way we hopefully don't waste
any space, but also avoid having a state split between two different
BOs.
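
The arithmetic, sketched with hypothetical names (old_end is where the
pool ended before growing, offset is the next free offset):

#include <stdint.h>

uint32_t alloc_offset_with_padding(uint32_t offset, uint32_t size,
                                   uint32_t old_end, uint32_t *padding)
{
   if (offset + size <= old_end) {
      *padding = 0;                 /* the state fits in the old range */
      return offset;
   }
   *padding = old_end - offset;     /* unusable tail of the old range */
   return old_end;                  /* start of the newly added range */
}

For example, with offset = 1000, size = 64 and old_end = 1024, the
caller gets the state at 1024 plus 24 bytes of padding to recycle.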

v3:
 - Calculate offset + padding at anv_block_pool_alloc_new (Jason).
v4:
 - Remove extra "leftover".

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:19 -08:00
Rafael Antognolli 7ed0898a8d anv/allocator: Rework chunk return to the state pool.
This commit tries to rework the code that splits and returns chunks back
to the state pool, while still keeping the same logic.

The original code would get a chunk larger than we need and split it
into pool->block_size. Then it would return all but the first one, and
would split that first one into alloc_size chunks. Then it would keep
the first one (for the allocation), and return the others back to the
pool.

The new anv_state_pool_return_chunk() function will take a chunk (with
the alloc_size part removed), and a small_size hint. It then splits that
chunk into pool->block_size'd chunks, and if there's some space still
left, split that into small_size chunks. small_size in this case is the
same size as alloc_size.

The idea is to keep the same logic, but make it in a way we can reuse it
to return other chunks to the pool when we are growing the buffer.
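
In sketch form (push_free() is a hypothetical stand-in for the actual
free-list push):

#include <stdint.h>

struct pool { uint32_t block_size; };
void push_free(struct pool *pool, uint32_t size, uint32_t offset);

void return_chunk(struct pool *pool, uint32_t offset, uint32_t size,
                  uint32_t small_size)
{
   /* Carve the chunk into full blocks first; never return anything
    * larger than pool->block_size. */
   while (size >= pool->block_size) {
      push_free(pool, pool->block_size, offset);
      offset += pool->block_size;
      size -= pool->block_size;
   }
   /* Split whatever is left into small_size (== alloc_size) states. */
   while (small_size > 0 && size >= small_size) {
      push_free(pool, small_size, offset);
      offset += small_size;
      size -= small_size;
   }
}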

v2:
 - Include Jason's suggestions to the algorithm that returns chunks.
 - Update comments.

v3:
 - Disallow returning 0 blocks (Jason).
 - fix min_size in the loop (Jason).
 - remove temporary variables (Jason)
v4:
 - return_chunk() should never return blocks larger than
 pool->block_size.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:17 -08:00
Rafael Antognolli 6a1f4c96cc anv: Remove some asserts.
They won't be true anymore once we add support for multiple BOs with
non-userptr.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:14 -08:00
Rafael Antognolli f39dad7e4e anv: Validate the list of BOs from the block pool.
We now have multiple BOs in the block pool, but some instructions still
reference only the first one, while others use relative offsets. So we
must be sure to add all the BOs from the block pool to the validation
list when submitting commands.
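
In rough shape (the types and the add function below are hypothetical
stand-ins for the anv ones):

#include <stdint.h>

struct bo { uint32_t gem_handle; };
struct block_pool { struct bo *bos; uint32_t nbos; };
struct execbuf;                            /* the driver's execbuf state */
int execbuf_add_bo(struct execbuf *eb, struct bo *bo);

int add_block_pool_bos(struct execbuf *eb, struct block_pool *pool)
{
   /* Every BO backing the pool must be on the validation list. */
   for (uint32_t i = 0; i < pool->nbos; i++) {
      int ret = execbuf_add_bo(eb, &pool->bos[i]);
      if (ret)
         return ret;
   }
   return 0;
}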

v2:
   - Don't add block pool BOs to the dependency list right before
   execbuf (Jason)
   - Call anv_execbuf_add_bo() to each BO in the block pools (Jason)
   - Use anv_execbuf_add_bo_set() to add surface state dependencies to
   execbuf.

v3:
   - Add comment to the non-softpin case (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:10 -08:00
Rafael Antognolli 11a5d4620b anv: Split code to add BO dependencies to execbuf.
This part of the anv_execbuf_add_bo() code is totally independent of the
BO being added. Let's split it out, so we can reuse it later.

v3: rename to anv_execbuf_add_bo_set (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:08 -08:00
Rafael Antognolli f874604f45 anv/allocator: Add support for a list of BOs in block pool.
So far we use only one BO (the last one created) in the block pool. When
we switch away from the userptr API, we will need multiple BOs. So add
code now to store multiple BOs in the block pool.

This has several implications, the main one being that we can't use
pool->map as before. For that reason we update the getter to find which
BO a given offset is part of, and return the respective map.
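
A sketch of the updated getter, with hypothetical types and field names:

#include <stddef.h>
#include <stdint.h>

struct pool_bo { void *map; uint32_t pool_offset, size; };
struct block_pool { struct pool_bo bos[64]; uint32_t nbos; };

void *block_pool_map(struct block_pool *pool, uint32_t offset)
{
   /* Find the BO whose range contains 'offset' and translate the
    * pool-relative offset into that BO's CPU map. */
   for (uint32_t i = 0; i < pool->nbos; i++) {
      struct pool_bo *bo = &pool->bos[i];
      if (offset >= bo->pool_offset && offset < bo->pool_offset + bo->size)
         return (char *)bo->map + (offset - bo->pool_offset);
   }
   return NULL;   /* offset not backed by any BO */
}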

v3:
 - Simplify anv_block_pool_map (Jason).
 - Use fixed size array for anv_bo's (Jason)
v4:
 - Respect the order (item, container) in anv_block_pool_foreach_bo
 (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:04 -08:00
Rafael Antognolli e3dc56d731 anv: Update usage of block_pool->bo.
Change block_pool->bo to be a pointer, and update its usage everywhere.
This makes it simpler to switch it later to a list of BOs.

v3:
 - Use a static "bos" field in the struct, instead of malloc'ing it.
 This will be later changed to a fixed length array of BOs.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:02 -08:00
Rafael Antognolli fc3f588320 anv/allocator: Remove pool->map.
After switching to using anv_state_table, there are very few places left
still using pool->map directly. We want to avoid that because it won't
always be the right map once we split the pool into multiple BOs.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:08:00 -08:00
Rafael Antognolli 54e21e145e anv/allocator: Rename anv_free_list2 to anv_free_list.
Now that we removed the original anv_free_list, we can use its name.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:58 -08:00
Rafael Antognolli 234c9d8a40 anv/allocator: Remove anv_free_list.
The next commit already renames anv_free_list2 -> anv_free_list since
the old one is gone.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:56 -08:00
Rafael Antognolli e2179aceaf anv/allocator: Use anv_state_table on back_alloc too.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:52 -08:00
Rafael Antognolli d18267fb48 anv/allocator: Use anv_state_table on anv_state_pool_alloc.
Use anv_state_pool_return_blocks() to return blocks to the pool, instead
of manually pushing them.

v3:
 - return blocks from the end of the chunk (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:50 -08:00
Rafael Antognolli 6a1dcfe73d anv/allocator: Add helper to push states back to the state table.
The use of anv_state_table_add() combined with anv_state_table_push(),
especially when adding a bunch of states to the table, is very verbose.
So we add this helper that makes things easier to digest.

We also add the anv_state_table member in this commit, so things can
compile properly, even though it's not used yet.

v2: assert that the states are always aligned to their size (Jason)
v3: Add "table" member to anv_state_pool in this commit.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:47 -08:00
Rafael Antognolli e8b6e0a5ba anv/allocator: Add getter for anv_block_pool.
We will need the anv_block_pool_map to find the map relative to some BO
that is not at the start of the block pool.

v2: just return a pointer instead of a struct (Jason)
v4: Update comment (Jason)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:43 -08:00
Rafael Antognolli 6a2d5ae305 anv/allocator: Add anv_state_table.
Add a structure to hold anv_states. This table will initially be used to
recycle anv_states, instead of relying on a linked list implemented in
GPU memory. Later it could be used so that all anv_states just point to
the content of this struct, instead of making copies of anv_states
everywhere.

One has to call anv_state_table_add(), which returns an index for the
state in the table, then get a pointer to that entry, and finally
fill in the rest of the struct.
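
In sketch form, the described pattern (the getter name and exact
signatures are assumed):

uint32_t idx;
VkResult result = anv_state_table_add(&pool->table, &idx, 1);
if (result != VK_SUCCESS)
   return result;

struct anv_state *state = anv_state_table_get(&pool->table, idx);
state->offset = offset;          /* fill in the rest of the struct */
state->alloc_size = size;
state->map = map;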

TODO:
   1) There's a lot of common code between this table's backing store
   memory and the anv_block_pool buffer, due to how we grow it. I think
   it's possible to refactor this and reuse code in both places.

   2) Add unit tests.

v3:
 - Rename state table memfd (Jason)
 - Return VK_ERROR_OUT_OF_HOST_MEMORY on more places (Jason)
 - anv_state_table_grow returns VkResult (Jason)
 - Rename variables to be more informative (Jason)
 - Return errors on state table grow.
 - Rename anv_state_table_push/pop to anv_free_list_push2/pop2
   This will be renamed again to remove the trailing "2" later.

v4:
 - Remove exit(-1) from anv_state_table (Jason).
 - Use uint32_t "next" field in anv_free_entry (Jason).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:07:34 -08:00
Rafael Antognolli 27478ce00e anv/tests: Fix block_pool_no_free test.
There were 2 problems with this test.

First it was comparing highest, which was -1, with a uint32_t. So the
current value would never be higher than that, and the assert would
always be false. It just never reached this point because of the next
problem.

It was always looking for the highest value of each thread and storing
it in thread_max. So a test case like this wouldn't work:

[Thread]: [Blocks]
   [0]: [0, 32, 64, 96]
   [1]: [128, 160, 192, 224]
   [2]: [256, 288, 320, 352]

Not only would that skip values and iterate only over thread number 2,
instead of walking through all of them, but thread_max was also
initialized to -1 and then compared to the unsigned blocks[i][next[i]].

We fix that by getting the smallest value of each thread, and checking
if it is lower than thread_min, which is initialized to INT32_MAX. And
then we end up walking through all the blocks of all threads. We also
change "blocks" to be int32_t instead of uint32_t, since in some places
(alloc_blocks) it was already referenced as int32_t, and that fixes the
comparison to -1.
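
The corrected scan, in sketch form (the constants are hypothetical):

#include <assert.h>
#include <stdint.h>

#define NUM_THREADS 16
#define BLOCKS_PER_THREAD 1024

static void validate_monotonic(int32_t blocks[][BLOCKS_PER_THREAD])
{
   unsigned next[NUM_THREADS] = { 0 };
   int32_t highest = -1;

   for (;;) {
      /* Pick the lowest unconsumed block across all threads... */
      int32_t thread_min = INT32_MAX;
      int min_thread = -1;
      for (unsigned i = 0; i < NUM_THREADS; i++) {
         if (next[i] < BLOCKS_PER_THREAD && blocks[i][next[i]] < thread_min) {
            thread_min = blocks[i][next[i]];
            min_thread = i;
         }
      }
      if (min_thread < 0)
         break;                 /* all blocks consumed */

      /* ...and require the global sequence to be strictly increasing. */
      assert(thread_min > highest);
      highest = thread_min;
      next[min_thread]++;
   }
}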

v2:
 - keep highest initialized to -1, and change blocks to be int32_t.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 15:05:58 -08:00
Lionel Landwerlin 4149d41f2e anv: fix invalid binding table index computation
The ++ operator strikes again.

Fixes: f92c5bc8f3 ("anv/device: fix maximum number of images supported")
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-17 11:49:10 -08:00
Eric Engestrom c4c5c90255 docs: explain how to see what meson options exist
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
2019-01-17 17:05:41 +00:00
Emil Velikov 406623f5b1 docs: update calendar, add news item and link release notes for 18.3.2
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
2019-01-17 11:37:41 +00:00
Emil Velikov 9d58641bf2 docs: add sha256 checksums for 18.3.2
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
(cherry picked from commit 8320a07221a342ea56528a1839ce5b33c8226b36)
2019-01-17 11:32:20 +00:00
Emil Velikov 2dad014496 docs: add release notes for 18.3.2
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
(cherry picked from commit 95a3b709c0d4618d900f8b8bed429ee4f786fab2)
2019-01-17 11:32:19 +00:00
Iago Toral Quiroga f92c5bc8f3 anv/device: fix maximum number of images supported
We had defined MAX_IMAGES as 8, which we used to size the array for
image push constant data. The comment there stated that this was for
gen8, but anv_nir_apply_pipeline_layout runs for all gens and writes
that array, asserting that we don't exceed that number of images,
which imposes a limit of MAX_IMAGES on all gens.

Furthermore, despite this, we are exposing up to 64 images per shader
stage on all gens, gen8 included.

This patch lowers the number of images we expose in gen8 to 8 and
keeps 64 images for gen9+ while making sure that only pre-SKL gens
use push constant space to handle images.
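
A sketch of the resulting limits (the macro names and predicate are
assumed, not taken from the patch):

#define MAX_GEN8_IMAGES 8   /* sized by the push constant image params */
#define MAX_IMAGES      64  /* gen9+ needs no push constant space here */

static unsigned max_images_for_gen(unsigned gen)
{
   return gen <= 8 ? MAX_GEN8_IMAGES : MAX_IMAGES;
}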

v2:
 - <= instead of < in the assert (Eric, Lionel)
 - Change the way the assertion is written (Eric)

v3:
 - Revert the way the assertion is written to the form it had in v1,
   the version in v2 was not equivalent and was incorrect. (Lionel)

v4:
 - gen9+ doesn't need push constants for images at all (Jason)

Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> (v3)
2019-01-17 07:59:00 +01:00
Tapani Pälli a311aa631d anv: do not advertise AHW support if extension not enabled
Fixes the following failing vk-gl-cts cases on Linux desktop:

   dEQP-VK.api.external.memory.android_hardware_buffer.suballocated.buffer.info
   dEQP-VK.api.external.memory.android_hardware_buffer.suballocated.image.info
   dEQP-VK.api.external.memory.android_hardware_buffer.dedicated.image.info
   dEQP-VK.api.external.memory.android_hardware_buffer.dedicated.buffer.info

Fixes: 517103abf1 "anv/android: add ahardwarebuffer external memory properties"
Reported-by: Juan A. Suarez <jasuarez@igalia.com>
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Juan A. Suarez <jasuarez@igalia.com>
2019-01-17 07:22:02 +02:00
Eric Anholt 99ef66c325 vc4: Don't leak the GPU fd for renderonly usage.
Noticed while debugging V3D -- the ro->gpu_fd was freshly opened in ro
setup, and it needs to stay open until screen close (since it may be used
by renderonly) and should be the same one used by the vc4 screen.

Fixes: 7029ec05e2 ("gallium: Add renderonly-based support for pl111+vc4.")
2019-01-16 16:28:41 -08:00
Eric Anholt 0605726776 v3d: Don't leak the GPU fd for renderonly usage.
The CTS was running out of fds, because of the ro->gpu_fd never being
closed.  ro->gpu_fd should match the screen (in case the caller of
v3d_drm_screen_create_renderonly() has a scanout_for_resource() that uses
gpu_fd) and the screen is expected to close its fd at the end, fixing the
resource leak.

Fixes: e113b21cb7 ("v3d: Add renderonly support.")
2019-01-16 16:28:41 -08:00
Eric Anholt 59527a36e9 v3d: Restructure RO allocations using resource_from_handle.
I had bugs in the old path where I was laying out as tiled (so we'd render
tiled) but then only allocating space in the shared object for linear
rendering.  The resource_from_handle makes it so the same layout choices
are made in both the import and export scanout cases.  It also fixes a leak
of the fd that was tripping up the CTS.

Now that we're checking PIPE_BIND_SHARED to choose to use RO, the
DRM_FORMAT_MOD_LINEAR check wasn't needed any more.

Fixes visual corruption and MMU faults in X in renderonly mode.

Fixes: bd09bb1629 ("v3d: SHARED but not necessarily SCANOUT buffers on RO must be linear.")
2019-01-16 16:28:41 -08:00
Eric Anholt d70eb2302b v3d: If the modifier is not known on BO import, default to linear for RO.
Part of fixing DRI3 rendering with RO on X11.

Fixes: e113b21cb7 ("v3d: Add renderonly support.")
2019-01-16 16:28:41 -08:00
Timothy Arceri cb527d2c4c ac/nir_to_llvm: add support for structs to get_sampler_desc()
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2019-01-17 10:35:36 +11:00
Timothy Arceri b12316cc92 ac/nir_to_llvm: fix regression in bindless support
This wasn't ported over when deref support was implemented.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2019-01-17 10:35:36 +11:00
Timothy Arceri e106e0f2dd radeonsi/nir: get correct type for images inside structs
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2019-01-17 10:35:36 +11:00
Timothy Arceri 292887ac0d ac/nir_to_llvm: fix type handling in image code
The current code only strips off arrays and cannot find the type
for images that are struct members.

Instead of trying to get the image type from the variable, we just
get it directly from the deref instruction.
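
Sketched against NIR's deref API (surrounding code assumed):

/* In the image intrinsic handler: read the type straight off the deref
 * instead of peeling it from the variable. */
nir_deref_instr *deref = nir_src_as_deref(instr->src[0]);
const struct glsl_type *type = deref->type;   /* valid for struct members too */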

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2019-01-17 10:35:36 +11:00
Rhys Perry 8a52e4cc4f radv: use dithered alpha-to-coverage
This matches the behaviour of AMDVLK and hides banding.
It also seems to be allowed by the Vulkan spec.

Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-16 20:49:23 +00:00
Alok Hota 187a6506a3 swr/rast: Store cached files in multiple subdirs
This improves cache filesystem performance, especially during CI tests
Also updated jitcache magic number due to codegen parameter changes
Removed 2 `if constexpr` uses to avoid requiring C++17
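
A sketch of the idea (the directory fan-out and naming scheme below are
hypothetical):

#include <stdio.h>

/* Fan cache files out over 256 subdirectories keyed by the low byte of
 * the content hash, so no single directory grows huge. */
static void cache_path(char *out, size_t len, const char *cache_dir,
                       unsigned long long hash)
{
   snprintf(out, len, "%s/%02x/%016llx.bin", cache_dir,
            (unsigned)(hash & 0xff), hash);
}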
2019-01-16 13:53:30 -06:00
Alok Hota bb98be61f4 swr/rast: New execution engine per JIT
Fixes relocation errors with LLVM 7.0.0
2019-01-16 13:53:30 -06:00
Alok Hota b135db5d58 swr/rast: Scope MEM_CLIENT enum for mem usages
Avoids confusion with other defaulted integer parameters

- fixed some unspecified usages
- removed unnecessary includes
- removed unnecessary protected access specifier in buckets framework
2019-01-16 13:53:30 -06:00