st/nir: Move 64-bit lowering later

Now that we have a loop unrolling cost function and loop unrolling isn't
going to kill us the moment we have a 64-bit op in a loop, we can go
ahead and move 64-bit lowering later.  This gives us the opportunity to
do more optimizations and actually let the full optimizer run even on
64-bit ops rather than hoping one round of opt_algebraic will fix
everything.  This substantially reduces both fp64 shader compile times
and the resulting code size.

Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Jason Ekstrand, 2019-03-04 17:02:39 -06:00 (committed by Jason Ekstrand)
parent 656ace3dd8
commit 9ab1b1d022
1 changed file with 5 additions and 2 deletions
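
For context, the 64-bit lowering the message refers to runs inside a fixed-point loop whose tail is visible in the second hunk below: lowering interleaved with nir_opt_algebraic until nothing changes. A minimal sketch of that shape, assuming the usual Mesa NIR_PASS plumbing; the lowering calls themselves are not part of the hunks and are only indicated by a placeholder comment:

   if (lower_64bit) {
      bool lowered_64bit_ops = false;
      bool progress = false;

      do {
         progress = false;

         /* Placeholder: the driver-requested 64-bit integer and double
          * lowering passes run here; their exact calls are not shown in
          * the hunks below. */

         /* nir_opt_algebraic cleans up what lowering produced and reports
          * whether another iteration is worthwhile. */
         NIR_PASS(progress, nir, nir_opt_algebraic);
         lowered_64bit_ops |= progress;
      } while (progress);
   }

Because progress is reset on every iteration and is necessarily false once the loop exits, the accumulated lowered_64bit_ops flag is what records whether any lowering happened at all, and it is what gates the extra st_nir_opts round added in the second hunk.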

src/mesa/state_tracker/st_glsl_to_nir.cpp

@@ -410,6 +410,8 @@ st_glsl_to_nir(struct st_context *st, struct gl_program *prog,
       NIR_PASS_V(nir, nir_lower_alu_to_scalar);
    }
 
+   st_nir_opts(nir, is_scalar);
+
    if (lower_64bit) {
       bool lowered_64bit_ops = false;
       bool progress = false;
@@ -429,9 +431,10 @@ st_glsl_to_nir(struct st_context *st, struct gl_program *prog,
          NIR_PASS(progress, nir, nir_opt_algebraic);
          lowered_64bit_ops |= progress;
       } while (progress);
-   }
 
-   st_nir_opts(nir, is_scalar);
+      if (lowered_64bit_ops)
+         st_nir_opts(nir, is_scalar);
+   }
 
    return nir;
 }
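
Read together, the two hunks leave the changed region in roughly the following shape (a reconstruction for illustration only; the unchanged lowering loop between the hunks is elided behind a comment):

   /* The full optimizer now runs before 64-bit lowering... */
   st_nir_opts(nir, is_scalar);

   if (lower_64bit) {
      bool lowered_64bit_ops = false;

      /* ... the 64-bit lowering loop (unchanged between the two hunks) ... */

      /* ...and runs a second time only when lowering actually changed
       * something, rather than unconditionally after this block as before. */
      if (lowered_64bit_ops)
         st_nir_opts(nir, is_scalar);
   }

   return nir;
}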