Commit Graph

68935 Commits

Kenneth Graunke 201aef9d13 i965/fp: Emit discard jumps.
This should improve the performance of any shaders using the KIL
instruction.  I'm a bit surprised we missed this.

Unfortunately, I have not been able to measure any performance
improvements from this patch.  It does make ARB_fragment_program
behave similarly to GLSL code.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2015-03-19 16:14:51 -07:00
Kenneth Graunke 8a0946f3b1 i965/fs: Make an emit_discard_jump() function to reduce duplication.
This is already copied in two places, and I want to copy it to a third
place.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2015-03-19 16:14:51 -07:00
Laura Ekstrand 09bfa326a9 main: Add TEXTURE_CUBE_MAP support in CopyTextureSubImage3D.
So it turns out that this doesn't actually fix any bugs or add any features,
strictly speaking. However, it does avoid a lot of kludginess.  Previously, if
you called

glCopyTextureSubImage3D(texcube, 0, 0, 0, zoffset = 3, ...

it would grab the texture image object for face = 0 in teximage.c instead of
the desired face = 3.  But Line 274 of brw_blorp_blit.cpp would correct for
this by updating the slice to 3.

This commit does the correct thing before calling any drivers,
which should make the functionality much more robust and uniform across all
drivers.

Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
2015-03-19 16:07:57 -07:00
Laura Ekstrand 037e36a8aa main: Simplify debug messages for CopyTex*SubImage*D.
Reviewed-by: Martin Peres <martin.peres@linux.intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
2015-03-19 16:07:44 -07:00
Ian Romanick a44b95cd57 glsl: Annotate as_foo functions that the this pointer cannot be NULL
We use the idiom

   ir_foo *x = y->as_foo();
   if (x == NULL)
      return;

all over the place.  GCC generates some quite lovely code for this.
One such example:

  340a5b:       83 7d 18 04             cmpl   $0x4,0x18(%rbp)
  340a5f:       0f 85 06 04 00 00       jne    340e6b
  340a65:       48 85 ed                test   %rbp,%rbp
  340a68:       0f 84 fd 03 00 00       je     340e6b

This case used as_expression() (ir_type_expression is 4).  Note that it
checks the ir_type, then checks that the pointer isn't NULL.  There is
some disconnect in GCC around the condition in the as_foo functions.

      return ir_type == ir_type_##TYPE ? (ir_##TYPE *) this : NULL; \

It believes "this" could be NULL, so it emits a check outside the function
just for fun.

This patch uses assume() to tell GCC that it need not bother with extra
NULL checking of the pointer returned by the as_foo functions.

   text	   data	    bss	    dec	    hex	filename
4836430	 158688	  26248	5021366	 4c9eb6	i965_dri-before.so
4836173	 158688	  26248	5021109	 4c9db5	i965_dri-after.so

v2: Replace 'if (this == NULL) unreachable("this cannot be NULL")' with
assume(this != NULL).  Suggested by Ilia Mirkin.
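
For reference, here is a self-contained sketch of the pattern with the
annotation applied; the class layout is simplified and assume() is spelled
out with __builtin_unreachable(), so this is illustrative rather than the
exact Mesa definitions.

   #include <cstddef>

   #define assume(expr) do { if (!(expr)) __builtin_unreachable(); } while (0)

   enum ir_node_type { ir_type_expression = 4 };

   class ir_instruction {
   public:
      ir_node_type ir_type;

      class ir_expression *as_expression()
      {
         /* Tell GCC the returned pointer needs no extra NULL check. */
         assume(this != NULL);
         return ir_type == ir_type_expression ? (class ir_expression *) this : NULL;
      }
   };

   class ir_expression : public ir_instruction {};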

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2015-03-19 15:35:42 -07:00
Paul Berry bf9d921936 main: Change the type argument of use_shader_program() to gl_shader_stage.
This allows it to be called from a loop.

Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kristian Høgsberg <krh@bitplanet.net>
2015-03-19 13:38:51 -07:00
Paul Berry 57b2652322 main: Clean up a strange construction in use_shader_program().
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
2015-03-19 13:38:51 -07:00
Jason Ekstrand 46c35c61e9 i965/nir: Sort uniforms direct-first and use two different uniform registers
Previously, we put all the uniforms into one big array.  The problem with
this approach is that, as soon as there was one indirect array access, the
backend would decide that the entire large array should be pull constants.
This commit splits the array in half: first direct-only uniforms and then
potentially-indirect uniforms.  This may not be optimal, but it does let
the backend promote things to push constants.
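
The ordering can be sketched generically like this (hypothetical data
structures, not the actual NIR/i965 code):

   #include <stdbool.h>

   struct uniform_var {
      bool     used_indirectly;   /* touched by any indirect array access */
      unsigned location;
   };

   /* Give direct-only uniforms the lowest locations so the backend can keep
    * that leading range as push constants, and only demote the trailing,
    * potentially-indirect range to pull constants.  Returns the boundary.
    */
   static unsigned
   assign_uniform_locations(struct uniform_var *vars, unsigned count)
   {
      unsigned loc = 0;
      for (unsigned i = 0; i < count; i++)
         if (!vars[i].used_indirectly)
            vars[i].location = loc++;
      unsigned num_direct_uniforms = loc;
      for (unsigned i = 0; i < count; i++)
         if (vars[i].used_indirectly)
            vars[i].location = loc++;
      return num_direct_uniforms;
   }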

Shader-db results on HSW:
total instructions in shared programs: 4114840 -> 4112172 (-0.06%)
instructions in affected programs:     43316 -> 40648 (-6.16%)
helped:                                116
HURT:                                  0

v2: Set param_size[num_direct_uniforms] only if we have indirect uniforms.
    This caused a bug that, strangely enough, only showed up on Broadwell
    vertex shaders.

Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-19 13:18:39 -07:00
Jason Ekstrand 8a33f95b7a nir/lower_io: Add an assign_locations function that sorts by [in]direct use
v2: Delete the set of indirectly accessed variables when we're done with it
v3: Rename from _packed to _scalar

Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-19 13:18:39 -07:00
Jason Ekstrand 25db44a845 nir/lower_io: Make variable location assignment a manual operation
Previously, we just assigned variable locations in nir_lower_io.  Now, we
force the user to assign variable locations for us.  This gives the backend
a bit more control over where variables are placed.

v2: Rename from _packed to _scalar

Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-19 13:18:39 -07:00
Jason Ekstrand 639115123e nir: Use a list instead of a hash_table for inputs, outputs, and uniforms
We never did a single hash table lookup in the entire NIR code base that I
could find, so there was no real benefit to doing it that way.  I suppose
that for linking we'll probably want to be able to look up by name, but we
can leave building that hash table to the linker.  In the meantime, this was
causing problems with GLSL IR -> NIR because GLSL IR doesn't guarantee us
unique names for uniforms, etc.  This was causing massive rendering issues
in the Unreal 4 Sun Temple demo.

Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-19 13:18:38 -07:00
Brian Paul 8f255f948b gallivm: remove unused 'builder' variable
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2015-03-19 12:56:35 -06:00
Brian Paul 1cd3745911 mesa: use more descriptive error messages for glUniform errors
Different errors for type mismatches, size mismatches and matrix/
non-matrix mismatches.  Use a common format of "uniformName"@location
in the messages.

Reviewed-by: Martin Peres <martin.peres@linux.intel.com>
2015-03-19 12:56:35 -06:00
Matt Turner b0d422cd2a i965/fs: Print spills:fills and number of promoted constants.
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
2015-03-19 11:15:57 -07:00
Ian Romanick b616164c95 i965/fs: Emit better b2f of an expression on GEN4 and GEN5
On platforms that do not natively generate 0u and ~0u for Boolean
results, b2f expressions that look like

    f = b2f(expr cmp 0)

will generate better code by pretending the expression is

    f = ir_triop_sel(0.0, 1.0, expr cmp 0)

This is because the last instruction of "expr" can generate the
condition code for the "cmp 0".  This avoids having to do the "-(b & 1)"
trick to generate 0u or ~0u for the Boolean result.  This means code like

    mov(16)         g16<1>F         1F
    mul.ge.f0(16)   null            g6<8,8,1>F      g14<8,8,1>F
    (+f0) sel(16)   m6<1>F          g16<8,8,1>F     0F

will be generated instead of

    mul(16)         g2<1>F          g12<8,8,1>F     g4<8,8,1>F
    cmp.ge.f0(16)   g2<1>D          g4<8,8,1>F      0F
    and(16)         g4<1>D          g2<8,8,1>D      1D
    and(16)         m6<1>D          -g4<8,8,1>D     0x3f800000UD

v2: When the comparison is either == 0.0 or != 0.0, use the knowledge that
the true (or false) case already results in zero to allow better code
generation by possibly avoiding a load-immediate instruction.

v3: Apply the optimization even when neither comparison operand is zero.

Shader-db results:

GM45 (0x2A42):
total instructions in shared programs: 3551002 -> 3550829 (-0.00%)
instructions in affected programs:     33269 -> 33096 (-0.52%)
helped:                                121

Iron Lake (0x0046):
total instructions in shared programs: 4993327 -> 4993146 (-0.00%)
instructions in affected programs:     34199 -> 34018 (-0.53%)
helped:                                129

No change on other platforms.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Tapani Palli <tapani.palli@intel.com>
2015-03-19 10:21:08 -07:00
Matt Turner 036e347f3c util: Optimize _mesa_roundeven with SSE 4.1.
The SSE 4.1 ROUND instructions let us implement roundeven directly.
Otherwise we assume that the rounding mode has not been modified (as we
do in the rest of Mesa) and use rint().

glibc uses the ROUND instruction in rint() after a cpuid check. This
patch just lets us inline it directly when we're already building for
SSE 4.1.
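
A minimal sketch of the idea, assuming a build-time __SSE4_1__ check (not
the actual util implementation):

   #include <math.h>
   #ifdef __SSE4_1__
   #include <smmintrin.h>
   #endif

   static inline float
   roundeven_sketch(float x)
   {
   #ifdef __SSE4_1__
      /* ROUNDSS with round-to-nearest(-even), exceptions suppressed. */
      __m128 v = _mm_set_ss(x);
      v = _mm_round_ss(v, v, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
      return _mm_cvtss_f32(v);
   #else
      /* Relies on the rounding mode being the default round-to-nearest. */
      return rintf(x);
   #endif
   }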

Reviewed-by: Carl Worth <cworth@cworth.org>
2015-03-18 21:06:26 -07:00
Matt Turner 5de86102f9 util: Add a roundeven test.
Reviewed-by: Carl Worth <cworth@cworth.org>
2015-03-18 21:06:26 -07:00
Matt Turner dd0d3a2c0f mesa: Replace _mesa_round_to_even() with _mesa_roundeven().
Eric's initial patch adding constant expression evaluation for
ir_unop_round_even used nearbyint. The open-coded _mesa_round_to_even
implementation came about without much explanation after a reviewer
asked whether nearbyint depended on the application not modifying the
rounding mode. Of course (as Eric commented) we rely on the application
not changing the rounding mode from its default (round-to-nearest) in
many other places, including the IROUND function used by
_mesa_round_to_even!

Worse, IROUND() is implemented using the trunc(x + 0.5) trick which
fails for x = nextafterf(0.5, 0.0).
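
That failure is easy to reproduce in isolation (demonstration only, not
part of the patch):

   #include <math.h>
   #include <stdio.h>

   int main(void)
   {
      float x = nextafterf(0.5f, 0.0f);  /* largest float strictly below 0.5 */

      /* x + 0.5f rounds up to exactly 1.0f, so the trunc trick returns 1 ... */
      printf("trunc trick: %.1f\n", truncf(x + 0.5f));   /* prints 1.0 */
      /* ... while rounding x to the nearest integer should give 0. */
      printf("rintf:       %.1f\n", rintf(x));           /* prints 0.0 */
      return 0;
   }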

Still worse, _mesa_round_to_even unexpectedly returns an int. I suspect
that could cause problems when rounding large integral values not
representable as an int in ir_constant_expression.cpp's
ir_unop_round_even evaluation. Its use of _mesa_round_to_even is clearly
broken for doubles (as noted during review).

The constant expression evaluation code for the packing built-in
functions also mistakenly assumed that _mesa_round_to_even returned a
float, as can be seen by the cast through a signed integer type to an
unsigned (since negative float -> unsigned conversions are undefined).

rint() and nearbyint() implement the round-half-to-even behavior we want
when the rounding mode is set to the default round-to-nearest. The only
difference between them is that rint() may raise the inexact exception,
while nearbyint() must not.

This patch implements _mesa_roundeven{f,}, a function similar to the
roundeven function added by the as-yet-unimplemented technical specification
ISO/IEC TS 18661-1:2014, with a small difference in behavior -- we don't
bother suppressing the inexact exception, which I don't think we care about
anyway.

At least recent Intel CPUs can quickly change a subset of the bits in
the x87 floating-point control register, but the exception mask bits are
not included. rint() does not need to change these bits, but nearbyint()
does (twice: save old, set new, and restore old) in order to suppress the
inexact exception, which would incur some penalty.

Reviewed-by: Carl Worth <cworth@cworth.org>
2015-03-18 21:06:26 -07:00
Matt Turner bb22aa08e4 i965/fs: Ignore type in cmod prop if scan_inst is CMP.
total instructions in shared programs: 6263270 -> 6203091 (-0.96%)
instructions in affected programs:     2606529 -> 2546350 (-2.31%)
helped:                                14301
GAINED:                                5
LOST:                                  3

Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
2015-03-18 21:03:09 -07:00
Jason Ekstrand e1f3ddef8c i965/nir: Make our environment variable checking smarter
Before, we enabled NIR if you set INTEL_USE_NIR to anything, which meant that
INTEL_USE_NIR=false would actually turn on NIR.  In preparation for turning
NIR on by default, this commit makes it smarter by allowing the
INTEL_USE_NIR variable to work as either a force-enable or a force-disable.
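
Roughly the behaviour described above, as a sketch; the helper name is made
up and the real check lives in the i965 setup code:

   #include <stdbool.h>
   #include <stdlib.h>
   #include <string.h>

   /* Hypothetical helper: unset -> default, "0"/"false" -> force-disable,
    * any other value -> force-enable.
    */
   static bool
   env_var_as_tristate(const char *name, bool default_value)
   {
      const char *str = getenv(name);
      if (str == NULL)
         return default_value;
      if (strcmp(str, "0") == 0 || strcmp(str, "false") == 0)
         return false;
      return true;
   }

   /* e.g. bool use_nir = env_var_as_tristate("INTEL_USE_NIR", false); */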

Reviewed-by: Mark Janes <mark.a.janes@intel.com>
2015-03-18 16:40:22 -07:00
Dave Airlie 37e3a116f8 egl: don't fill client apis string forever.
We never reset the string on eglTerminate, so it grows
forever across multiple eglInitialize calls.

Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2015-03-19 08:28:38 +10:00
Jose Fonseca cebc62f106 swrast: Use BITFIELD64_BIT for arrayAttribs.
As VARYING_SLOT_MAX can be bigger than 32.
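
The distinction, sketched with approximate definitions (not copied from the
Mesa headers):

   #include <stdint.h>

   #define BITFIELD_BIT(b)    (1u << (b))              /* undefined for b >= 32 */
   #define BITFIELD64_BIT(b)  (((uint64_t) 1) << (b))  /* fine for b up to 63 */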

I'll probably stop building swrast with MSVC in the near future, but this
seems a real bug regardless.

Reviewed-by: Brian Paul <brianp@vmware.com>
2015-03-18 21:51:54 +00:00
Jose Fonseca d3e9aa8d88 scons: Don't link program_lexer.l/y twice.
program/lex.yy.c and program/program_parse.tab.c are already included in
the PROGRAM_FILES variable.

We still need to specify the dependency relationship though.

Reviewed-by: Brian Paul <brianp@vmware.com>
2015-03-18 21:51:54 +00:00
Jose Fonseca a56f1a8b32 gallivm: Use INFINITY directly.
Already done below.

Reviewed-by: Brian Paul <brianp@vmware.com>
2015-03-18 21:51:40 +00:00
Jose Fonseca 1d30fd85dd scons: Silence MSVC warnings about overflows in constant arithmetic.
These get triggered even when using the standard C99 INFINITY/NAN
constants.

Reviewed-by: Brian Paul <brianp@vmware.com>
2015-03-18 21:51:40 +00:00
José Fonseca bbac03ecca scons: Disable MSVC signed/unsigned mismatch warnings.
By default gcc ignores the issue, and as a result code that mixes
signed/unsigned is so widespread through the code base that the warnings
end up being little more than noise, potentially obscuring more pertinent
warnings.

Maybe one day we will enable the corresponding gcc warnings and clean up,
but until then, this change disables them.

Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
2015-03-18 21:51:40 +00:00
Laura Ekstrand 2ccfce3f4c docs: Update progress on ARB_direct_state_access.
Acked-by: Matt Turner <mattst88@gmail.com>
2015-03-18 13:59:39 -07:00
Brian Paul 627991dbf7 dri: add _glapi_set_nop_handler(), _glapi_new_nop_table() to dri_test.c
I wasn't aware of these _glapi_ stub functions when I committed
4bdbb588a9.  Fixes "make check".

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89662
Reviewed-by: Mark Janes <mark.a.janes@intel.com>
2015-03-18 12:46:11 -06:00
Brian Paul 9263986401 mesa: remove MSVC warning pragmas
Removing this block of pragmas doesn't seem to increase the number of
warnings generated by MSVC.  Other than signed/unsigned comparison warnings,
there are very few other warnings nowadays.

Acked-by: Matt Turner <mattst88@gmail.com>
2015-03-18 09:01:50 -06:00
Brian Paul ea1b066a34 mesa: add void to format_array_format_table_init() declaration
Silences an MSVC warning where it's called from call_once().

Reviewed-by: Matt Turner <mattst88@gmail.com>
2015-03-18 09:01:50 -06:00
Brian Paul 9fbbd60c1d mapi: move some #includes from .h file to .c files
Just include things where they're needed.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2015-03-18 09:01:50 -06:00
Brian Paul 4009d22b61 mesa: make _mesa_alloc_dispatch_table() static
Never called from outside of context.c

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2015-03-18 09:01:50 -06:00
Brian Paul 4bdbb588a9 mesa: reimplement dispatch table no-op function handling
Use the new _glapi_new_nop_table() and _glapi_set_nop_handler() to
improve how we handle calling no-op GL functions.

If there's a current context for the calling thread, generate a
GL_INVALID_OPERATION error.  This will happen if the app calls an
unimplemented extension function or it calls an illegal function
between glBegin/glEnd.

If there's no current context and it's a debug build, print an error to
stdout.
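
A hedged sketch of that handler logic; has_current_context() and
report_invalid_operation() are stand-in stubs for the Mesa internals, and
the callback is assumed to receive the entry point name:

   #include <stdbool.h>
   #include <stdio.h>

   static bool has_current_context(void) { return false; }                 /* stub */
   static void report_invalid_operation(const char *func) { (void) func; } /* stub */

   static void
   nop_handler(const char *func_name)
   {
      if (has_current_context()) {
         /* Unimplemented extension function, or an illegal call between
          * glBegin/glEnd: record GL_INVALID_OPERATION.
          */
         report_invalid_operation(func_name);
      } else {
   #ifndef NDEBUG   /* stand-in for "debug build" */
         printf("GL function %s called without a current context\n", func_name);
   #endif
      }
   }

   /* Registered once at startup, e.g. _glapi_set_nop_handler(nop_handler); */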

The dispatch_sanity.cpp file has some previous checks removed since
the _mesa_generic_nop() function no longer exists.

This fixes the piglit gl-1.0-dlist-begin-end and gl-1.0-beginend-coverage
tests on Windows.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2015-03-18 09:01:50 -06:00
Brian Paul 201e36e77d mapi: add new _glapi_new_nop_table() and _glapi_set_nop_handler()
_glapi_new_nop_table() creates a new dispatch table populated with
pointers to no-op functions.

_glapi_set_nop_handler() is used to register a callback function which
will be called from each of the no-op functions.

Now we always generate a separate no-op function for each GL entrypoint.
This allows us to do proper stack clean-up for Windows __stdcall and
lets us report the actual function name in error messages.  Before this
change, for non-Windows release builds we used a single no-op function
for all entrypoints.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2015-03-18 09:01:50 -06:00
Rob Clark aee26d292f freedreno/ir3: fix infinite recursion in sched
One more case we need to handle: one of the src instructions for the
indirect could also end up being the instruction itself.

Signed-off-by: Rob Clark <robclark@freedesktop.org>
2015-03-18 10:42:33 -04:00
Rob Clark 62cc003b7d freedreno: fix spelling
Signed-off-by: Rob Clark <robclark@freedesktop.org>
2015-03-18 10:42:33 -04:00
Marek Olšák 42715ad793 docs/GL3: don't list nv30
Suggested by Ilia Mirkin.
2015-03-18 12:04:27 +01:00
Marek Olšák 4e46af0195 docs/GL3: don't list swrast
Let's face it: This driver is unlikely to get more love.

Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2015-03-18 12:04:27 +01:00
Marek Olšák 2b5379651f docs/GL3: don't list r300
r300g already supports everything it can. There's no point in listing
the driver here.

Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2015-03-18 12:04:27 +01:00
Marek Olšák a984abdad3 radeonsi: increase coords array size for radeon_llvm_emit_prepare_cube_coords
radeon_llvm_emit_prepare_cube_coords uses coords[4] in some cases (TXB2 etc.)

Discovered by Coverity. Reported by Ilia Mirkin.

Cc: 10.5 10.4 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
2015-03-18 12:04:27 +01:00
Jonathan Gray 8475526a38 configure: check if compiler supports -Werror=vla.
Check if the compiler supports -Werror=vla before using it.
-Wvla was introduced with GCC 4.3 and is not present in 4.2.
Fixes the build on OpenBSD.

v2: Fix statement order, and quote $save_CFLAGS.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=89433
Signed-off-by: Jonathan Gray <jsg@jsg.id.au>
Signed-off-by: Jose Fonseca <jfonseca@vmware.com>
2015-03-18 10:53:20 +00:00
Chris Wilson eeb504e0ae i965: Defer the throttle until we submit new commands
Currently, we throttle before the user begins preparing commands for the
next frame when we acquire the draw/read buffers. However, construction
of the command buffer can itself take significant time relative to the
frame time. If we move the throttle from the buffer acquire to the
command submit phase we can allow the user to improve concurrency
between the CPU and GPU (i.e. reduce the amount of time we waste inside
the throttle).

v2: Whitespace + delay throttling until after the next submission for
greater parallelism

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Kristian Høgsberg <krh@bitplanet.net>
Cc: Chad Versace <chad.versace@linux.intel.com>
Cc: Ian Romanick <idr@freedesktop.org>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com> [v1]
2015-03-18 09:33:33 +00:00
Chris Wilson 64788b2e8d i965: Throttle to the previous frame
In order to facilitate the concurrency offered by triple buffering and to
offset the latency induced by swapping via an external process (which may
incur extra rendering itself), only throttle to the previous frame and not
the last.  A second issue, which mostly affects swap benchmarks but can also
add jitter to the throttling, is that the throttle bo currently sits closer
to the next SwapBuffers rather than immediately after the previous
SwapBuffers.  Throttling to the previous frame doubles the maximum possible
latency, with the benefit of improving throughput and reducing jitter.

v2: Rename the "first_post_swapbuffer" batches array to a plain
throttle_batch[] as the pluralisation was contorting the name and not
making it clear whether it was the first batch or the first_post_swap
batch.  Not least of the problems was that not all throttle points are
SwapBuffers.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Kristian Høgsberg <krh@bitplanet.net>
Cc: Chad Versace <chad.versace@linux.intel.com>
Cc: Ian Romanick <idr@freedesktop.org>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
2015-03-18 09:33:33 +00:00
Chris Wilson 8b9bd19021 i965: Throttle rendering to an fbo
When rendering to an fbo, even though it may be acting as a winsys
frontbuffer or just generally, we never throttle.  However, when rendering
to an fbo there is no natural frame boundary.  Conventionally we use
SwapBuffers and glFinish, but potential callers often avoid glFinish for
being too heavy-handed (it waits on all outstanding rendering to complete).
The kernel provides a soft-throttling option for this case that waits for
rendering older than 20ms to be complete (a little too lax to be used for
swapbuffers, but a useful safety net here).  The remaining choice is then
either never to throttle, to throttle after every draw call, or to throttle
at intermediate user-defined points such as glFlush and thus all the
implied flushes.  This patch opts for the latter as that is the current
method used for flushing to front buffers.

v2: Defer the throttling from inside the flush to the next
intel_prepare_render() and switch non-fbo frontbuffer throttling over to
use the same lax method.  The issue being that
glFlush()/intel_prepare_read() is just as likely to be called inside a
tight loop and not at "frame" boundaries.

v3: Rename from need_front_throttle to need_flush_throttle to avoid any
ambiguity between front buffer rendering and fbo rendering. (Chad)

v4: Whitespace

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Kristian Høgsberg <krh@bitplanet.net>
Cc: Chad Versace <chad.versace@linux.intel.com>
Cc: Ian Romanick <idr@freedesktop.org>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
2015-03-18 09:33:33 +00:00
Jason Ekstrand 27bf37ba05 nir/peephole_select: Allow uniform/input loads and load_const
Shader-db results on HSW:

total instructions in shared programs: 4174156 -> 4157291 (-0.40%)
instructions in affected programs:     145397 -> 128532 (-11.60%)
helped:                                383
HURT:                                  0
GAINED:                                20
LOST:                                  22

There are two more tests lost than gained.  However, comparing this with
GLSL IR vs. NIR results, the overall delta is reduced from 85/44
gained/lost on current master to 71/32 with this commit.  Therefore, I
think it's probably a boon since we are getting "closer" to where we were
before.

Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-17 17:11:05 -07:00
Jason Ekstrand 1be862c0c4 nir/peephole_select: Copy instructions into the block before the if
Previously we tried to do poor-man's copy propagation as we created the
select instructions.  Instead, this commit just moves the instructions from
the blocks inside the if into the block before.  Copy propagation will take
care of making sure we don't have any extra mov's in there for us.

Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-17 17:11:05 -07:00
Jason Ekstrand 8cf40ed05d nir/peephole_select: Rename are_all_move_to_phi and use a switch
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
2015-03-17 17:11:05 -07:00
Mario Kleiner cc5ddd584d glx: Handle out-of-sequence swap completion events correctly. (v2)
The code for emitting INTEL_swap_events swap completion
events needs to translate from 32-Bit sbc on the wire to
64-Bit sbc for the events and handle wraparound accordingly.

It assumed that events would be sent by the server in the
order their corresponding swap requests were emitted from
the client; in other words, the sbc count should always be
increasing. This was correct for DRI2.

This is not always the case under the DRI3/Present backend,
where the Present extension can execute presents and send out
completion events in a different order than the submission
order of the present requests, due to client code specifying
targetMSC target vblank counts which are not strictly
monotonically increasing. This confused the wraparound
handling. This patch fixes the problem by handling 32-Bit
wraparound in both directions. As long as the real 64-Bit sbcs
of successive swap completion events don't differ by more
than 2^30, this should be able to do the right thing.

How this is supposed to work:

awire->sbc contains the low 32-Bits of the true 64-Bit sbc
of the current swap event, transmitted over the wire.

glxDraw->lastEventSbc contains the low 32-Bits of the 64-Bit
sbc of the most recently processed swap event.

glxDraw->eventSbcWrap is a 64-Bit offset which tracks the upper
32-Bits of the current sbc. The final 64-Bit output sbc
aevent->sbc is computed from the sum of awire->sbc and
glxDraw->eventSbcWrap.

Under DRI3/Present, swap completion events can be received
slightly out of order due to non-monotonic targetMsc specified
by client code, e.g., present request submission:

Submission sbc:   1   2   3
targetMsc:        10  11  9

Reception of completion events:
Completion sbc:   3   1   2

The completion sequence 3, 1, 2 would confuse the old wraparound
handling made for DRI2 as 1 < 3 --> Assumes a 32-Bit wraparound
has happened when it hasn't.

The client can queue multiple present requests, in the case of
Mesa up to n requests for n-buffered rendering, e.g., n =  2-4 in
the current Mesa GLX DRI3/Present implementation. In the case of
direct Pixmap presents via xcb_present_pixmap() the number n is
limited by the amount of memory available.

We reasonably assume that the number of outstanding requests n is
much less than 2 billion due to memory constraints and common sense.
Therefore, while the order of received sbc's can be a bit scrambled,
successive 64-Bit sbc's won't deviate by much; a given sbc may be
a few counts lower or higher than the previously received sbc.

Therefore any large difference between the incoming awire->sbc and
the last recorded glxDraw->lastEventSbc will be due to 32-Bit
wraparound and we need to adapt glxDraw->eventSbcWrap accordingly
to adjust the upper 32-Bits of the sbc.

Two cases, corresponding to the two if-statements in the patch:

a) The previous sbc event was below the last 2^32 boundary, in the previous
glxDraw->eventSbcWrap epoch, and the new sbc event is in the next 2^32
epoch, so the low 32-Bit awire->sbc wrapped around to zero, or close to
zero --> awire->sbc is apparently much lower than the
glxDraw->lastEventSbc recorded for the previous epoch

--> We need to increment glxDraw->eventSbcWrap by 2^32 to adjust
the current epoch to be one higher than the previous one.

--> Case a) also handles the old DRI2 behaviour.

b) The previous sbc event was above the closest 2^32 boundary, but now a
late event from the previous 2^32 epoch arrives, with a true sbc
that belongs to the previous 2^32 segment, so the awire->sbc of
this late event has a high count close to 2^32, whereas
glxDraw->lastEventSbc is closer to zero --> awire->sbc is much
greater than glXDraw->lastEventSbc.

--> We need to decrement glxDraw->eventSbcWrap by 2^32 to adjust
the current epoch back to the previous lower epoch of this late
completion event.

We assume such a wraparound to a higher (a) epoch or lower (b)
epoch has happened if awire->sbc and glxDraw->lastEventSbc differ
by more than 2^30 counts, as such a difference can only happen
on wraparound, or if somehow 2^30 present requests would be pending
for a given drawable inside the server, which is rather unlikely.
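
The two adjustments can be sketched in a few lines; variable names follow
the description above rather than the actual glx code:

   #include <stdint.h>

   #define SBC_WRAP_THRESHOLD  (1ULL << 30)
   #define SBC_EPOCH           (1ULL << 32)

   /* wire_sbc: low 32 bits from the event (awire->sbc); last_event_sbc: low
    * 32 bits of the previously processed event; *event_sbc_wrap: running
    * upper-32-bit offset (glxDraw->eventSbcWrap).
    */
   static uint64_t
   reconstruct_sbc(uint32_t wire_sbc, uint32_t last_event_sbc,
                   uint64_t *event_sbc_wrap)
   {
      /* Case a): the counter wrapped forward into the next 2^32 epoch. */
      if ((uint64_t) wire_sbc + SBC_WRAP_THRESHOLD < last_event_sbc)
         *event_sbc_wrap += SBC_EPOCH;

      /* Case b): a late event from the previous 2^32 epoch arrived. */
      if ((uint64_t) last_event_sbc + SBC_WRAP_THRESHOLD < wire_sbc)
         *event_sbc_wrap -= SBC_EPOCH;

      return (uint64_t) wire_sbc + *event_sbc_wrap;
   }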

v2: Explain the reason for this patch and the new wraparound handling
    much more extensively in the commit message; no code change wrt. the
    initial version.

Cc: "10.3 10.4 10.5" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
2015-03-17 23:54:02 +00:00
Emil Velikov 3f94a5afcb r600g: constify r600_shader_tgsi_instruction lists.
Massive list of constant data. Annotate it as such.

Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2015-03-17 23:52:39 +00:00
Emil Velikov 63cf2b4448 r600g: kill off r600_shader_tgsi_instruction::{tgsi_opcode,is_op3}
Both of which are no longer used. Use designated initializers to make
things obvious as people add/remove TGSI_OPCODEs.

Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2015-03-17 23:52:35 +00:00