The algorithms used by this pass, especially for division, are heavily
based on the work Ian Romanick did for the similar int64 lowering pass
in the GLSL compiler.
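For context, the division lowering is in the shift-and-subtract family. A rough
plain-C sketch of that idea (written with native 64-bit arithmetic for
readability, whereas the pass builds the equivalent per-bit steps out of 32-bit
ops and bcsel; the helper name is made up):

   #include <stdint.h>

   /* Illustrative only -- not the pass itself.  Clamping the shift range by
    * log2(denom) keeps (denom << shift) from overflowing. */
   static uint64_t udiv64_sketch(uint64_t num, uint64_t denom)
   {
      if (denom == 0)
         return ~0ull;                       /* divide by zero: all ones */

      uint64_t quot = 0;
      int log2_denom = 63 - __builtin_clzll(denom);

      for (int shift = 63 - log2_denom; shift >= 0; shift--) {
         if ((denom << shift) <= num) {
            num -= denom << shift;
            quot |= 1ull << shift;
         }
      }
      return quot;                           /* 'num' now holds the remainder */
   }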
v2: Properly handle vectors
v3: Get rid of the log2_denom stuff. Since we're using bcsel, we do all the
calculations anyway, so it's just extra instructions.
v4:
- Add back in the log2_denom stuff since it's needed for ensuring that
the shifts don't overflow.
- Rework the looping part of the pass to be easier to expand.
Reviewed-by: Matt Turner <mattst88@gmail.com>
It's a problem waiting to happen. Individual headers should be annotated
if needed.
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
This tries to move comparisons (a common source of boolean values)
closer to their first use. For GPUs which use condition codes,
this can eliminate a lot of temporary booleans and comparisons
which reload the condition code register based on a boolean.
V2: (Timothy Arceri)
- fix moving comparisons for phis so we don't end up with:
vec1 32 ssa_227 = phi block_34: ssa_1, block_38: ssa_240
vec1 32 ssa_235 = feq ssa_227, ssa_1
vec1 32 ssa_230 = phi block_34: ssa_221, block_38: ssa_235
- add nir_op_i2b/nir_op_f2b to the list of comparisons.
V3: (Timothy Arceri)
- tidy up suggested by Jason.
- add inot/fnot to move comparison list
V4: (Jason Ekstrand)
- clean up move_comparison_source
- get rid of the tuple
- rework phi handling
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> [v1]
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In Vulkan, we always have both the TCS and TES available in the same
pipeline, so we can simply use the TCS OutputVertices execution mode
value as the TES PatchVertices built-in.
For GLSL, we handle this in the linker. But we could use this pass
in the case when both TCS and TES are linked together, if we wanted.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This function returns the nir_op corresponding to the conversion between
the given nir_alu_type arguments.
This function lacks support for integer-based types with bit_size != 32
and for float16 conversion ops.
v2:
- Improve readability of the code and delete cases that don't happen now (Jason)
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We rename it to nir_deref_clone, re-order the sources to match the other
clone functions, and expose nir_deref_var_clone. This last part, in
particular, lets us get rid of quite a few lines since we no longer have
to call nir_copy_deref and wrap it in deref_as_var.
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
When shaders come in from SPIR-V, we handle continue blocks by placing
the contents of the continue inside of an "if (!first_iteration)". We do
this so that we can properly handle the fact that continues in SPIR-V
jump to the continue block at the end of the loop rather than jumping
directly to the top of the loop like they do in NIR. In particular, the
increment step of a simple for loop ends up in the continue block. This
pass looks for this case in loops that don't actually have any continues
and moves the continue contents to the end of the loop instead. We need
this because loop unrolling doesn't work if the increment is inside of a
condition.
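To illustrate the transform (a plain-C analogy only; the pass works on NIR
control flow, and the functions below are made up):

   #include <stdbool.h>

   /* Before: the increment from the SPIR-V continue block sits at the top of
    * the loop, guarded by a first-iteration check. */
   static int sum_before(int n)
   {
      int sum = 0, i = 0;
      bool first_iteration = true;
      while (true) {
         if (!first_iteration)
            i++;                  /* continue-block contents */
         first_iteration = false;
         if (i >= n)
            break;
         sum += i;                /* loop body */
      }
      return sum;
   }

   /* After: since the loop has no explicit continues, the continue-block
    * contents can simply be moved to the end of the loop, giving the
    * unroller a plain bottom-of-loop increment. */
   static int sum_after(int n)
   {
      int sum = 0, i = 0;
      while (true) {
         if (i >= n)
            break;
         sum += i;                /* loop body */
         i++;                     /* moved continue-block contents */
      }
      return sum;
   }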
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
V2:
- tidy ups suggested by Connor.
- tidy up cloning logic and handle copy propagation
based on a suggestion by Connor.
- use nir_ssa_def_rewrite_uses to fix up lcssa phis
suggested by Connor.
- add support for complex loop unrolling (two terminators)
- handle the case where an ssa def's use outside the loop is already a phi
- support unrolling loops with multiple terminators when the trip count
is known for each terminator
V3:
- set correct num_components when creating phi in complex unroll
- rewrite the remap table update based on Jason's suggestions.
- remove the unrequired extract_loop_body() helper as suggested by Jason.
- simplify the lcssa phi fix-up code for simple loops as per Jason's suggestions.
- use a mem context to keep track of hash table memory as suggested by Jason.
- move is_{complex,simple}_loop helpers to the unroll code
- require nir_metadata_block_index
- partially rewrote complex unroll to be simpler and easier to follow.
V4:
- use rzalloc() when creating a nir_phi_src without setting pred right away;
fixes a regression caused by ralloc() no longer zeroing memory.
V5:
- simplify calling of complex_unroll()
- use new loop terminator fields to get the break/continue from blocks
and simplify loop unrolling code
- handle slightly less trivial loop terminators. If branches can
now have instructions but can only contain a single block.
- use nir_print-style IR snippets in the unroll function descriptions
- add a better explanation and variable for why we need to clone
additional times when the second terminator is the limiting
terminator.
- partially convert out of ssa before unrolling loops (suggested by Jason)
v6:
- remove unused nir_builder
- use Jason's new from-SSA helper
- tidy/fixup cursor use
- unroll terminators that contain control flow correctly
- unroll complex loops with control flow before the terminators
correctly
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
V2: Do a "depth first search" to convert to LCSSA
V3: Small comment fixup
V4: Rebase, adapt to removal of function overloads
V5: Rebase, adapt to relocation of nir to compiler/nir
Still need to adapt to potential if-uses
Work around nir_validate issue
V6 (Timothy):
- tidy lcssa and stop leaking memory
- don't rewrite the src for the lcssa phi node
- validate lcssa phi srcs to avoid postvalidate assert
- don't add new phi if one already exists
- more lcssa phi validation fixes
- Rather than marking ssa defs inside a loop just mark blocks inside
a loop. This is simpler and fixes lcssa for intrinsics which do
not have a destination.
- don't create LCSSA phis for loops we won't unroll
- require loop metadata for lcssa pass
- handle the case where an ssa def's use outside the loop is already a phi
V7: (Timothy)
- pass indirect mask to metadata call
v8: (Timothy)
- make convert to lcssa a helper function rather than a nir pass
- replace the inside-loop bitset with on-the-fly block index logic.
- remove lcssa phi validation special cases
- inline code from useless helpers, suggested by Jason.
- always do lcssa on loops, suggested by Jason.
- stop making lcssa phis special. Add as many sources as the block
has predecessors, suggested by Jason.
V9: (Timothy)
- fix regression with the is_lcssa_phi field not being initialised
to false now that ralloc() doesn't zero out memory.
V10: (Timothy)
- remove extra braces in SSA example, pointed out by Topi
V11: (Timothy)
- add missing support for LCSSA phis in if conditions.
V12: (Timothy)
- small tidy up suggested by Jason.
- always create lcssa phi even if it just points to an lcssa
phi from an inner loop
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This pass detects induction variables and calculates the
trip count of loops to be used for loop unrolling.
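For a rough feel of the trip-count arithmetic involved, here is a plain-C
sketch with a hypothetical helper; the real analysis derives the initial
value, limit and step from the NIR induction variable and terminator and
handles many more cases:

   /* For the common pattern
    *    for (i = init; i < limit; i += step)
    * with a positive step, the trip count is ceil((limit - init) / step). */
   static unsigned trip_count_lt(int init, int limit, int step)
   {
      if (step <= 0 || init >= limit)
         return 0;
      return (limit - init + step - 1) / step;   /* ceiling division */
   }

   /* e.g. trip_count_lt(0, 10, 1) == 10 and trip_count_lt(3, 10, 2) == 4 */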
V2: Rebase, adapt to removal of function overloads
V3: (Timothy Arceri)
- don't try to find trip count if loop terminator conditional is a phi
- fix trip count for do-while loops
- replace conditional type != alu assert with return
- disable unrolling of loops with continues
- multiple fixes to memory allocation, stop leaking and don't destroy
structs we want to use for unrolling.
- fix iteration count bugs when induction var not on RHS of condition
- add FIXME for && conditions
- calculate trip count for unsigned induction/limit vars
V4: (Timothy Arceri)
- count instructions in a loop
- set the limiting_terminator even if we can't find the trip count for
all terminators. This is needed for complex unrolling where we handle
2 terminators and the trip count is unknown for one of them.
- restructure structs so we don't keep information not required after
analysis and remove dead fields.
- force unrolling in some cases as per the rules in the GLSL IR pass
V5: (Timothy Arceri)
- fix metadata mask value 0x10 vs 0x16
V6: (Timothy Arceri)
- merge loop_variable and nir_loop_variable structs and lists as suggested by Jason
- remove the induction var hash table and store a pointer to the induction information in
the loop_variable as suggested by Jason.
- use lowercase list_addtail() as suggested by Jason.
- tidy up init_loop_block() as per Jason's suggestions.
- replace switch with nir_op_infos[alu->op].num_inputs == 2 in
is_var_basic_induction_var() as suggested by Jason.
- use nir_block_last_instr() in, and rename, foreach_cf_node_ex_loop() as suggested
by Jason.
- fix else check for is_trivial_loop_terminator() as per Connor's suggestions.
- simplify the offset for induction variables incremented before the exit condition is
checked.
- replace nir_op_isub check with assert() as it should have been lowered away.
V7: (Timothy Arceri)
- use rzalloc() on nir_loop struct creation. Worked previously because ralloc()
was broken and always zeroed the struct.
- fix cf_node_find_loop_jumps() to find jumps when loops contain
nested if statements. Code is tidier as a result.
V8: (Timothy Arceri)
- move is_trivial_loop_terminator() to nir.h so we can use it to assert in
the loop unroll pass
- fix analysis to not bail when looking for the terminator when the break is in the else
rather than the if
- added new loop terminator fields: break_block, continue_from_block and
continue_from_then so we don't have to gather these when doing unrolling.
- get correct array length when forcing unrolling of variably
indexed arrays that are the same size as the iteration count
- add support for induction variables of type float
- update trivial loop terminator check to allow an if containing
instructions as long as both branches contain only a single
block.
V9: (Timothy)
- bunch of tidy ups and simplifications suggested by Jason.
- rewrote trivial terminator detection; now the only restriction is that there
must be no nested jumps, anything else goes.
- rewrote the iteration test to use nir_eval_const_opcode().
- count instructions properly even when forcing an unroll.
- bunch of other tidy ups and simplifications.
V10: (Timothy)
- some trivial tidy ups suggested by Jason.
- conditional fix for break inside continue branch by Jason.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
These are designed for use within an optimization pass when SSA becomes
more pain than it's worth. They're very naive and don't generate
anything close to optimal register-based NIR. Also, they may result in
shaders which do not validate because of, for instance, registers in phi
sources. However, the register-based into-SSA pass should be pretty
efficient at cleaning up the mess.
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
This is ported from the Intel lowering pass that we use with GLSL IR.
This takes care of lowering texture gradients on shadow samplers other
than cube maps. Intel hardware requires this for gen < 8.
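The lowering replaces textureGrad() with textureLod(), computing the level
from the gradients along the lines of the usual rho/log2 LOD selection. A
simplified 2D sketch of that math in plain C (hypothetical helper, not the
pass itself):

   #include <math.h>

   /* rho is the larger of the two gradient lengths in texel space;
    * lod = log2(rho).  Comparing squared lengths lets the sqrt be folded
    * into the log2 as a multiply by 0.5. */
   static float lod_from_gradients(const float dPdx[2], const float dPdy[2],
                                   float width, float height)
   {
      float dx_u = dPdx[0] * width, dx_v = dPdx[1] * height;
      float dy_u = dPdy[0] * width, dy_v = dPdy[1] * height;

      float rho_sq = fmaxf(dx_u * dx_u + dx_v * dx_v,
                           dy_u * dy_u + dy_v * dy_v);

      return 0.5f * log2f(rho_sq);
   }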
v2 (Ken):
- Use the helper function to retrieve ddx/ddy
- Swizzle away size components we are not interested in
v3:
- Get rid of the ddx/ddy helper and use nir_tex_instr_src_index
instead (Ken, Eric)
v4:
- Add a 'continue' statement if the lowering makes progress because it
replaces the original texture instruction
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v3)
This is ported from the Intel lowering pass that we use with GLSL IR.
The NIR pass only handles cube maps, not shadow samplers, which are
also lowered for gen < 8 on Intel hardware. We will add support for
that in a later patch, at which point we should be able to remove
the GLSL IR lowering pass.
v2:
- added a helper to retrieve ddx/ddy parameters (Ken)
- No need to make size.z=1.0, we are only using component x anyway (Iago)
v3:
- Get rid of the ddx/ddy helper and use nir_tex_instr_src_index
instead (Ken, Eric)
v4:
- When emitting the textureLod operation, copy all texture parameters
from the original textureGrad() (except for ddx/ddy) using a loop
- Add a 'continue' statement if the lowering makes progress because it
replaces the original texture instruction
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v3)
git grep -l comparitor | xargs sed -i 's/comparitor/comparator/g'
Just happened to notice this in a patch that was sent and included one
of the tokens in question.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
All of these are happily set from glsl_to_nir or spirv_to_nir but their
values are never used for anything.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Constant initializers have been a constant (ha!) pain for quite some time.
While they're useful from a language perspective, people writing passes or
backends really don't want to deal with them most of the time. This commit
removes most of the constant initializer support from NIR. It is expected
that you call nir_lower_constant_initializers VERY EARLY to ensure that
they're gone before you do anything interesting.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
This has bothered me for about as long as NIR has been around. Why do we
have two different unions for constants? No good reason other than one of
them is a direct port from GLSL IR.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
v2: Use nir_is_per_vertex_io() rather than is_arrays_of_arrays().
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Certain built-in arrays, such as gl_ClipDistance[], gl_CullDistance[],
gl_TessLevelInner[], and gl_TessLevelOuter[] are specified as scalar
arrays. Normal scalar arrays are sparse - each array element usually
occupies a whole vec4 slot. However, most hardware assumes these
built-in arrays are tightly packed.
The new var->data.compact flag indicates that a scalar array should
be tightly packed, so a float[4] array would take up a single vec4
slot, and a float[8] array would take up two slots.
They are still arrays, not vec4s, however. nir_lower_io will generate
intrinsics using ARB_enhanced_layouts style component qualifiers.
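For illustration, a minimal sketch of the slot/component mapping this implies
(hypothetical helper, not Mesa code):

   struct slot_loc { unsigned slot; unsigned component; };

   /* With var->data.compact set, element i of a scalar array lands in
    * component (i % 4) of vec4 slot (base + i / 4), so float[4] fits in one
    * slot and float[8] spans two.  Without the flag, each element would get
    * its own slot. */
   static struct slot_loc compact_element_loc(unsigned base_slot, unsigned i)
   {
      struct slot_loc loc = { base_slot + i / 4, i % 4 };
      return loc;
   }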
v2: Add nir_validate code to enforce type restrictions.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
I want this function for nir_gather_info(), and realized it's basically
the same as the ones in nir_lower_io().
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
This is ported from GLSL and converts
if (cond)
   discard;
into
discard_if(cond);
This removes a block, but also is needed by radv
to work around a bug in the LLVM backend.
v2: handle if (a) discard_if(b) (nha)
cleanup and drop pointless loop (Matt)
make sure there are no dependent phis (Eric)
v3: make sure there is only one instruction in the then block.
v4: remove sneaky tabs, add cursor init (Eric)
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
As of 59864e8e02 we just use the location assigned by the front-end and
no longer need this for i965.
Since there were some issues in the logic with assigning arrays the same
driver location if they didn't start at the same location, just remove it
and let other drivers implement a solution if needed when they add
ARB_enhanced_layouts support.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When restoring something from the shader cache we won't have, and don't
want to create, a nir_shader; this change detaches the two.
There are other advantages such as being able to reuse the
shader info populated by GLSL IR.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This will allow us to stop copying values between structs and
will also simplify handling these values in the shader cache.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Now that the NIR casting functions have type assertions, we have a bunch of
assertions that aren't needed anymore.
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
One of NIR's invariants is that control flow lists always start and end
with blocks. There's no good reason why we should return a cf_node from
these functions since we know that it's always a block. Making it a block
lets us remove a bunch of code.
Signed-off-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>