nir/opcodes: Make ldexp take an explicitly 32-bit int

There is no sense in having the double version of ldexp take a 64-bit
integer.  Instead, let's just take a 32-bit int all the time.  This also
matches GLSL, where both variants of ldexp take a regular integer for
the exponent argument.
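
For reference, the C math library works the same way: both precisions of
ldexp take a plain int exponent, and there is no 64-bit-exponent form. A
minimal standard-C illustration (editorial sketch, not Mesa code):

#include <math.h>
#include <stdio.h>

int main(void)
{
   /* Both variants take an int exponent regardless of float width. */
   printf("%f %f\n", ldexp(1.0, 10), (double)ldexpf(1.0f, 10));
   return 0;  /* prints 1024.000000 1024.000000 */
}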

Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Jason Ekstrand 2016-04-27 11:12:44 -07:00
parent bee40dd730
commit f0af5b87ec
2 changed files with 2 additions and 2 deletions

src/compiler/nir/nir_opcodes.py

@@ -571,7 +571,7 @@ else
 dst = ((1u << bits) - 1) << offset;
 """)
 
-opcode("ldexp", 0, tfloat, [0, 0], [tfloat, tint], "", """
+opcode("ldexp", 0, tfloat, [0, 0], [tfloat, tint32], "", """
 dst = (bit_size == 64) ? ldexp(src0, src1) : ldexpf(src0, src1);
 /* flush denormals to zero. */
 if (!isnormal(dst))
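
The folded expression picks the C library function by bit size and then
flushes denormal results to zero; the body of that flush is truncated in
the context above. A sketch of the 32-bit path under that assumption
(editorial illustration only, not the verbatim opcode body):

#include <math.h>
#include <stdint.h>

/* Hypothetical stand-in for the folded 32-bit ldexp: evaluate via
 * ldexpf, then flush a denormal result to zero while preserving its
 * sign. copysignf here is an assumed flush-to-zero implementation. */
float fold_ldexp32(float src0, int32_t src1)
{
   float dst = ldexpf(src0, src1);
   if (!isnormal(dst))
      dst = copysignf(0.0f, dst);
   return dst;
}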

src/compiler/nir/nir_opt_algebraic.py

@@ -410,7 +410,7 @@ def ldexp32(f, exp):
    pow2_2 = fexp2i(('isub', exp, ('ishr', exp, 1)))
    return ('fmul', ('fmul', f, pow2_1), pow2_2)
 
-optimizations += [(('ldexp', 'x', 'exp'), ldexp32('x', 'exp'))]
+optimizations += [(('ldexp@32', 'x', 'exp'), ldexp32('x', 'exp'))]
 
 # Unreal Engine 4 demo applications open-codes bitfieldReverse()
 def bitfield_reverse(u):
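
The ldexp32 helper referenced here lowers ldexp into two multiplies by
powers of two built with fexp2i; the exponent is split in half so each
constructed power of two stays within the representable exponent range
even when a single 2^exp would not. A scalar C sketch of the same idea
(editorial illustration; the bit-level fexp2i construction is an
assumption mirroring the ishl/iadd pattern NIR uses):

#include <stdint.h>
#include <string.h>

/* Assumed fexp2i: build 2^e by writing the biased exponent into an
 * IEEE-754 single's exponent field (valid for e in [-126, 127]). */
float fexp2i(int e)
{
   uint32_t bits = (uint32_t)(e + 127) << 23;
   float f;
   memcpy(&f, &bits, sizeof f);
   return f;
}

/* Mirrors the lowering above: split exp so each half is representable,
 * then multiply twice. Assumes >> on a negative int is an arithmetic
 * shift, matching NIR's ishr. */
float ldexp32_lowered(float f, int exp)
{
   float pow2_1 = fexp2i(exp >> 1);         /* 2^(exp >> 1)         */
   float pow2_2 = fexp2i(exp - (exp >> 1)); /* 2^(exp - (exp >> 1)) */
   return (f * pow2_1) * pow2_2;
}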