Commit Graph

87767 Commits

Author SHA1 Message Date
Timothy Arceri 51daccb289 nir: add a loop unrolling pass
V2:
- tidy ups suggested by Connor.
- tidy up cloning logic and handle copy propagation
  based on a suggestion by Connor.
- use nir_ssa_def_rewrite_uses to fix up lcssa phis
  suggested by Connor.
- add support for complex loop unrolling (two terminators)
- handle the case where the ssa def's use outside the loop is already a phi
- support unrolling loops with multiple terminators when trip count
  is known for each terminator

V3:
- set correct num_components when creating phi in complex unroll
- rewrite the update of the remap table based on Jason's suggestions.
- remove the unneeded extract_loop_body() helper, as suggested by Jason.
- simplify the lcssa phi fix up code for simple loops as per Jason's suggestions.
- use mem context to keep track of hash table memory as suggested by Jason.
- move is_{complex,simple}_loop helpers to the unroll code
- require nir_metadata_block_index
- partially rewrote complex unroll to be simpler and easier to follow.

V4:
- use rzalloc() when creating a nir_phi_src without setting pred right away;
  fixes a regression caused by ralloc() no longer zeroing memory.

V5:
- simplify calling of complex_unroll()
- use new loop terminator fields to get the break/continue from blocks
  and simplify loop unrolling code
- handle slightly less trivial loop terminators. The if branches can
  now contain instructions but only a single block.
- use nir-print-style IR snippets in unroll function descriptions
- add better explanation and variable for why we need to clone
  additional times when the second terminator is the limiting
  terminator.
- partially convert out of ssa before unrolling loops (suggested by Jason)

v6:
- remove unused nir_builder
- use Jason's new from-SSA helper
- tidy/fixup cursor use
- unroll terminators that contain control flow correctly
- unroll complex loops with control flow before the terminators
  correctly

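For context, unrolling replaces a counted loop whose trip count is known with
copies of its body; a rough C-level illustration of the result (the pass itself
operates on NIR, cloning the loop body and dropping the back edge):

    /* Illustrative only: what fully unrolling a loop with a known trip
     * count of 4 amounts to at the source level. */
    static int sum4(const int v[4])
    {
       int sum = 0;
       /* before: for (int i = 0; i < 4; i++) sum += v[i]; */
       sum += v[0];
       sum += v[1];
       sum += v[2];
       sum += v[3];
       return sum;
    }
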
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Timothy Arceri f8407a5398 nir: add helper for cloning nir_cf_list
V2:
- updated to create a generic list clone helper nir_cf_list_clone()
- continue to assert on clone when fallback flag not set as suggested
  by Jason.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Timothy Arceri b84dfa0f62 nir: update fixup_phi_srcs() to handle registers
We need to do this because we partially get out of SSA when unrolling
and cloning loops.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Timothy Arceri d781320974 nir: create helper for fixing phi srcs when cloning
This will be useful for fixing phi srcs when cloning a loop body
during loop unrolling.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Thomas Helland ec8423a4b1 nir: Add an LCSSA pass
V2: Do a "depth first search" to convert to LCSSA

V3: Small comment fixup

V4: Rebase, adapt to removal of function overloads

V5: Rebase, adapt to relocation of nir to compiler/nir
    Still need to adapt to potential if-uses
    Work around nir_validate issue

V6 (Timothy):
 - tidy lcssa and stop leaking memory
- don't rewrite the src for the lcssa phi node
 - validate lcssa phi srcs to avoid postvalidate assert
 - don't add new phi if one already exists
 - more lcssa phi validation fixes
 - Rather than marking ssa defs inside a loop just mark blocks inside
   a loop. This is simpler and fixes lcssa for intrinsics which do
   not have a destination.
 - don't create LCSSA phis for loops we won't unroll
 - require loop metadata for lcssa pass
- handle the case where the ssa def's use outside the loop is already a phi

V7: (Timothy)
- pass indirect mask to metadata call

v8: (Timothy)
- make convert to lcssa a helper function rather than a nir pass
- replace inside loop bitset with on the fly block index logic.
- remove lcssa phi validation special cases
- inline code from useless helpers, suggested by Jason.
- always do lcssa on loops, suggested by Jason.
- stop making lcssa phis special. Add as many sources as the block
  has predecessors, suggested by Jason.

V9: (Timothy)
- fix regression with the is_lcssa_phi field not being initialised
  to false now that ralloc() doesn't zero out memory.

V10: (Timothy)
- remove extra braces in SSA example, pointed out by Topi

V11: (Timothy)
- add missing support for LCSSA phis in if conditions.

V12: (Timothy)
- small tidy up suggested by Jason.
- always create lcssa phi even if it just points to an lcssa
  phi from an inner loop

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Thomas Helland 6772a17acc nir: Add a loop analysis pass
This pass detects induction variables and calculates the
trip count of loops to be used for loop unrolling.

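As a rough sketch of the arithmetic involved, assuming the simplest case of one
induction variable with constant start, step and limit and a '<' exit condition
(the real pass handles more comparison types and evaluates the exit condition
by constant folding, see the nir_eval_const_opcode() note below):

    #include <stdint.h>

    /* Trip count of: for (i = start; i < limit; i += step) { ... }
     * Returns -1 if the loop cannot be shown to terminate. */
    static int32_t trip_count(int32_t start, int32_t step, int32_t limit)
    {
       if (start >= limit)
          return 0;              /* exit condition false on entry */
       if (step <= 0)
          return -1;             /* never reaches the limit */
       int64_t span = (int64_t)limit - start;
       return (int32_t)((span + step - 1) / step);  /* ceil(span / step) */
    }
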
V2: Rebase, adapt to removal of function overloads

V3: (Timothy Arceri)
 - don't try to find trip count if loop terminator conditional is a phi
 - fix trip count for do-while loops
 - replace conditional type != alu assert with return
 - disable unrolling of loops with continues
 - multiple fixes to memory allocation, stop leaking and don't destroy
   structs we want to use for unrolling.
 - fix iteration count bugs when induction var not on RHS of condition
 - add FIXME for && conditions
 - calculate trip count for unsigned induction/limit vars

V4: (Timothy Arceri)
- count instructions in a loop
- set the limiting_terminator even if we can't find the trip count for
 all terminators. This is needed for complex unrolling where we handle
 2 terminators and the trip count is unknown for one of them.
- restructure structs so we don't keep information not required after
 analysis and remove dead fields.
- force unrolling in some cases as per the rules in the GLSL IR pass

V5: (Timothy Arceri)
- fix metadata mask value 0x10 vs 0x16

V6: (Timothy Arceri)
- merge loop_variable and nir_loop_variable structs and lists, as suggested by Jason
- remove induction var hash table and store pointer to induction information in
  the loop_variable, as suggested by Jason.
- use lowercase list_addtail() suggested by Jason.
- tidy up init_loop_block() as per Jason's suggestions.
- replace switch with nir_op_infos[alu->op].num_inputs == 2 in
  is_var_basic_induction_var() as suggested by Jason.
- use nir_block_last_instr() in, and rename, foreach_cf_node_ex_loop(), as
  suggested by Jason.
- fix else check for is_trivial_loop_terminator() as per Connor's suggestions.
- simplify offset for induction variables incremented before the exit condition
  is checked.
- replace nir_op_isub check with assert() as it should have been lowered away.

V7: (Timothy Arceri)
- use rzalloc() on nir_loop struct creation. Worked previously because ralloc()
  was broken and always zeroed the struct.
- fix cf_node_find_loop_jumps() to find jumps when loops contain
  nested if statements. Code is tidier as a result.

V8: (Timothy Arceri)
- move is_trivial_loop_terminator() to nir.h so we can use it to assert in
  the loop unroll pass
- fix analysis to not bail when looking for terminator when the break is in the else
  rather than the if
- added new loop terminator fields: break_block, continue_from_block and
  continue_from_then so we don't have to gather these when doing unrolling.
- get correct array length when forcing unrolling of variably
  indexed arrays that are the same size as the iteration count
- add support for induction variables of type float
- update trivial loop terminator check to allow an if containing
  instructions as long as both branches contain only a single
  block.

V9: (Timothy)
 - bunch of tidy ups and simplifications suggested by Jason.
 - rewrote trivial terminator detection, now the only restriction is there
   must be no nested jumps, anything else goes.
 - rewrote the iteration test to use nir_eval_const_opcode().
 - count instructions properly even when forcing an unroll.
 - bunch of other tidy ups and simplifications.

V10: (Timothy)
 - some trivial tidy ups suggested by Jason.
 - conditional fix for break inside continue branch by Jason.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Timothy Arceri eda3ec7957 i965: use nir_lower_indirect_derefs() for GLSL
This moves the nir_lower_indirect_derefs() call into
brw_preprocess_nir() so that it is called by both OpenGL and Vulkan,
and removes the call to the old GLSL IR pass
lower_variable_index_to_cond_assign().

We want to do this pass in nir to be able to move loop unrolling
to nir.

There is an increase of 1-3 instructions in a small number of shaders,
and 2 Kerbal Space Program shaders that increase by 32 instructions.
The changes seem to be caused by the difference in the GLSL IR vs
NIR variable index lowering passes. The GLSL IR pass creates a
simple if ladder for arrays of size 4 or less, while the NIR pass
implements a binary search for all arrays regardless of size.

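Roughly, the two lowering strategies differ as follows for a 4-element array
(illustrative C, not the IR either pass actually emits):

    /* GLSL IR pass: simple if-ladder for arrays of size <= 4. */
    static float read_if_ladder(const float a[4], int i)
    {
       if (i == 0) return a[0];
       if (i == 1) return a[1];
       if (i == 2) return a[2];
       return a[3];
    }

    /* NIR pass: binary search on the index, regardless of array size. */
    static float read_binary_search(const float a[4], int i)
    {
       if (i < 2)
          return i < 1 ? a[0] : a[1];
       return i < 3 ? a[2] : a[3];
    }
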
Shader-db results BDW:

total instructions in shared programs: 13021176 -> 13021819 (0.00%)
instructions in affected programs: 57693 -> 58336 (1.11%)
helped: 20
HURT: 190

total cycles in shared programs: 299805580 -> 299750826 (-0.02%)
cycles in affected programs: 2290024 -> 2235270 (-2.39%)
helped: 337
HURT: 442

total fills in shared programs: 19984 -> 19984 (0.00%)
fills in affected programs: 0 -> 0
helped: 0
HURT: 0

LOST:   4
GAINED: 0

V2: remove the do_copy_propagation() call from the i965 GLSL IR
linking code. This call was added in f7741c5211 but since we are
moving the variable index lowering to NIR we no longer need it and
can just rely on the nir copy propagation pass.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:36 +11:00
Timothy Arceri 976859ce57 i965: allow sampler indirects on all gens
Without this we will regress the max-samplers piglit test on Gen6
and lower when loop unrolling is done in NIR. There is a check
in the GLSL IR linker that errors when it finds indirects and
EmitNoIndirectSampler is set.

As far as I can tell there is no reason for not enabling this for
all gens regardless of whether they fully support ARB_gpu_shader5
or not.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-23 10:15:35 +11:00
Jason Ekstrand a620f66872 nir: Add a couple quick-and-dirty out-of-SSA helpers
These are designed for use within an optimization pass when SSA becomes
more pain than it's worth.  They're very naive and don't generate
anything close to optimal register-based NIR.  Also, they may result in
shaders which do not validate because of, for instance, registers in phi
sources.  However, the register-based into-SSA pass should be pretty
efficient at cleaning up the mess.

Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
2016-12-23 10:15:35 +11:00
Arda Coskunses 99de7b7525 vulkan/wsi/x11: don't crash on null wsi x11 connection
Without this check the driver crashes when the application
window is closed unexpectedly.

Acked-by: Edward O'Callaghan <funfunctor@folklore194.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
2016-12-22 14:09:46 -08:00
Arda Coskunses 01dd363e67 vulkan/wsi/x11: don't crash on null visual
When the application window is closed unexpectedly, the lost
window causes the visualtype lookup to receive invalid
parameters, which leads to a crash. Add the necessary check
to prevent the crash.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
2016-12-22 14:09:34 -08:00
Christian Inci 7a4ea95f1c radeonsi: Bugfix needed for hashcat
Hashcat needs MAX_GLOBAL_BUFFERS to be 21 or even 22 for some modes. It'll crash otherwise.
I'm adding an assert to see if programs need it to be even higher.

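The guard amounts to something like this (hypothetical function and parameter
names, not the actual radeonsi code):

    #include <assert.h>

    #define MAX_GLOBAL_BUFFERS 22

    static void set_global_buffers(unsigned first, unsigned count)
    {
       /* Catch applications that need the table to be even larger. */
       assert(first + count <= MAX_GLOBAL_BUFFERS);
       /* ... store/bind the buffers ... */
    }
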
Signed-off-by: Christian Inci <chris.bugsfd@broke-the-inter.net>
[Handle first properly; should be NFC, since clover always uses first == 0.]
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-12-22 17:11:43 +01:00
Nicolai Hähnle eca57f85ee radeonsi: fix gl_ClipDistance and gl_ClipVertex for points
The clipper hardware doesn't consider points as primitives that can be
clipped. Simply setting the corresponding cull bits works, and should not
have an adverse effect on other primitive types according to the hardware
team.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
2016-12-22 16:59:58 +01:00
Nicolai Hähnle 3778a10d37 radeonsi: only set VS_OUT_MISC_SIDE_BUS_ENA when the misc vector is used
Should have no effect (other than perhaps on power consumption), but
Vulkan does this.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Edward O'Callaghan <funfunctor@folklore1984.net>
2016-12-22 16:58:53 +01:00
Vinson Lee ede8c02ab0 llvmpipe: Link tests with CLOCK_LIB.
Fix linking error with 'make check'.

  CXXLD  lp_test_format
../../../../src/gallium/auxiliary/.libs/libgallium.a(os_time.o): In function `os_time_get_nano':
src/gallium/auxiliary/os/os_time.c:59: undefined reference to `clock_gettime'

Signed-off-by: Vinson Lee <vlee@freedesktop.org>
2016-12-21 17:23:05 -08:00
Fredrik Höglund 27a8aab882 radv: fix dual source blending
Add the index to the location when assigning driver locations for
output variables.

Otherwise two fragment shader outputs declared as:

   layout (location = 0, index = 0) out vec4 output1;
   layout (location = 0, index = 1) out vec4 output2;

will end up aliasing one another.

Note that this patch will make the second output variable in the above
example alias a possible third output variable with location = 1 and
index = 0. But this shouldn't be a problem in practice since only one
color attachment is supported when dual-source blending is used.

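A minimal sketch of the idea, with hypothetical field names standing in for the
shader output variable data (not the actual radv code):

    struct out_var {
       unsigned location;        /* layout(location = N) */
       unsigned index;           /* layout(index = N), 0 or 1 for dual-source */
       unsigned driver_location; /* slot handed to the backend */
    };

    static void assign_driver_location(struct out_var *var)
    {
       /* Without adding the index, the index 0 and index 1 outputs at
        * location 0 would get the same driver slot and alias. */
       var->driver_location = var->location + var->index;
    }
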
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2016-12-22 02:07:17 +01:00
Dave Airlie 877202b6dc radv: enable shaderStorageImageExtendedFormats
This passes all the CTS tests that get enabled for this.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-12-22 10:29:15 +10:00
Dave Airlie a3ca2a9b7b radv: enable shaderGatherImageExtended
Thanks to Ilia's patch this works fine on radv.

No regressions in CTS, all enabled tests pass.

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-12-22 09:48:18 +10:00
Dave Airlie 56020c7a7c radv/image: only touch queue family info for concurrent images.
The spec says to ignore these fields for exclusive images.

Fixes crashes in:
dEQP-VK.clipping.*

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-12-21 23:33:04 +00:00
Dave Airlie 9d23b8a18e radv: flush smem for uniform buffer bit.
(cc'ing stable as I'd like to backport the ubo speedup as well)

Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2016-12-21 22:31:14 +00:00
Junwei Zhang 13ae47234a radeonsi: add Polaris12 PCI ID
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-12-21 15:10:54 -05:00
Junwei Zhang 018ead4266 radeonsi: add Polaris12 support (v3)
v2: use gfxip names for llvm 4.0+
v3: use tonga for llvm <= 3.8, drop gfxip name,
we can just change that when we change the other asics.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
2016-12-21 15:10:03 -05:00
Ian Romanick 15c8f322ca glsl: Eliminate the open-coded version of process_block_array_leaf
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
2016-12-21 10:24:45 -08:00
Juan A. Suarez Romero 415f5f09e3 ttn: handle GLSL_SAMPLER_DIM_SUBPASS_MS case
Fixes a warning.

Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
2016-12-21 12:44:25 +01:00
Juan A. Suarez Romero c32a9ec5f5 i965: allow unsourced enabled VAO
The GL 4.5 spec says:
    "If any enabled array’s buffer binding is zero when DrawArrays
    or one of the other drawing commands defined in section 10.4 is
    called, the result is undefined."

This commit avoids a crash, which is not a very good
"undefined result".

This fixes spec/!opengl 3.1/vao-broken-attrib piglit test.
2016-12-21 12:37:22 +01:00
Edward O'Callaghan 8801734da7 svga: Fix a strict-aliasing violation in shader dumper
As per the C spec, it is illegal to alias pointers to different
types. This results in undefined behaviour after optimization
passes, resulting in very subtle bugs that happen only on a
full moon.

Use a memcpy() as a well-defined coercion between the isomorphic
bit-field interpretations of memory.

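The coercion pattern looks roughly like this (generic sketch; the struct and
field names are made up, not the svga ones):

    #include <stdint.h>
    #include <string.h>

    struct token_bits {
       unsigned opcode : 16;
       unsigned length : 16;
    };

    /* Reinterpret the same 32 bits as a bit-field struct via memcpy(),
     * instead of casting a uint32_t * to a struct token_bits *, which
     * would violate strict aliasing. (The real patch also static-asserts
     * that the two sizes match.) */
    static struct token_bits decode_token(uint32_t word)
    {
       struct token_bits t;
       memcpy(&t, &word, sizeof(t));
       return t;
    }
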
V.2: Use C99 compat STATIC_ASSERT() over C11 static_assert().

Signed-off-by: Edward O'Callaghan <funfunctor@folklore1984.net>
Reviewed-by: Charmaine Lee <charmainel@vmware.com>
2016-12-21 15:00:21 +11:00
Roland Scheidegger e827d91756 draw: use SoA fetch, not AoS one
Now that there's some SoA fetch which never falls back, we should always get
results which are better or at least not worse (something like rgba32f will
stay the same).

For cases which get way better, think something like R16_UNORM with 8-wide
vectors: this was 8 sign-extend fetches, 8 cvt, 8 muls, followed by
a couple of shuffles to stitch things together (if it is smart enough,
6 unpacks) and then a (8-wide) transpose (not sure if llvm could even
optimize the shuffles + transpose, since the 16bit values were actually
sign-extended to 128bit before being cast to a float vec, so that would be
another 8 unpacks). Now that is just 8 fetches (directly inserted into
vector, albeit there's one 128bit insert needed), 1 cvt, 1 mul.

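For reference, the per-channel math the SoA path boils down to for R16_UNORM is
just a normalize-by-scaling step (scalar C sketch; the generated code does this
across the whole 8-wide vector with a single cvt and mul):

    #include <stdint.h>

    /* UNORM16 -> float: once the 16-bit values are laid out SoA, the whole
     * conversion is one int->float convert and one multiply per vector. */
    static void r16_unorm_to_float(const uint16_t *src, float *dst, int n)
    {
       for (int i = 0; i < n; i++)
          dst[i] = (float)src[i] * (1.0f / 65535.0f);
    }
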
v2: ditch the old AoS code instead of just disabling it.

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-12-21 04:48:24 +01:00
Roland Scheidegger cb81460dcc gallivm: generalize the compressed format soa fetch a bit
This can now handle rgtc (unorm) too - this path no longer handles plain
formats, but that's unnecessary since they now all have their proper SoA
unpack (this will still be dog-slow though, due to the actual fetch going
through per-pixel util fallbacks).

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-12-21 04:48:24 +01:00
Roland Scheidegger 3c98e3cd63 gallivm: provide soa fetch path handling formats with more than 32bit
This previously always fell back to AoS conversion. Even for 4-float formats
(which is the optimal case by far for that fallback case) this was suboptimal,
since it meant the conversion couldn't be done with 256bit vectors. While this
may still only be partly possible for some formats, (unless there's AVX2
support) at least the transpose can be done with half the unpacks
(and before using the transpose for AoS fallbacks, it was worse still).
With less than 4 channels, things got way worse with the AoS fallback
quickly even with 128bit vectors.
The strategy is pretty much the same as the existing one for formats
which fit into 32 bits, except there's now multiple vectors to be
fetched (2 or 4 to be exact), which need to be shuffled first (if it's 4
vectors, this amounts to a transpose, for 2 it's a bit different),
then the unpack is done the same (with the exception that the shift
of the channels is now modulo 32, and we need to select the right
vector).
In fact the most complex part about it is to get the shuffles right
for separating into lo/hi parts for AVX/AVX2...
This also makes use of the new ability of gather to use provided type
information, which we abuse to outsmart llvm so we get decent shuffles,
and to fetch 3x32bit vectors without having to ZExt the scalar.
And just because we can, we handle double formats too, albeit they are
a bit different (draw sometimes needs to handle that).
v2: fix float/int typo bug (generating inefficient code).

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-12-21 04:48:24 +01:00
Roland Scheidegger 8bd67a35c5 gallivm: optimize gather a bit, by using supplied destination type
By using a dst_type in the gather interface, gather has some more
knowledge about how values should be fetched.
E.g. if this is a 3x32bit fetch and dst_type is 4x32bit vector gather
will no longer do a ZExt with a 96bit scalar value to 128bit, but
just fetch the 96bit as 3x32bit vector (this is still going to be
2 loads of course, but the loads can be done directly to simd vector
that way).
Also, we can now try to use the right int/float type. This should
make no difference really since there's typically no domain transition
penalties for such simd loads, however it actually makes a difference
since llvm will use different shuffle lowering afterwards so the caller
can use this to trick llvm into using sane shuffle afterwards (and yes
llvm is really stupid there - nothing against using the shuffle
instruction from the correct domain, but not at the cost of doing 3 times
more shuffles, the case which actually matters is refusal to use shufps
for integer values).
Also do some attempt to avoid things which look great on paper but llvm
doesn't really handle (e.g. fetching 3-element 8 bit and 16 bit vectors
which is simply disastrous - I suspect type legalizer is to blame trying
to extend these vectors to 128bit types somehow, so fetching these with
scalars like before which is suboptimal due to the ZExt).

Remove the ability for truncation (no point, this is gather, not conversion)
as it is complex enough already.

While here also implement not just the float, but also the 64bit avx2
gathers (disabled though since based on the theoretical numbers the benefit
just isn't there at all until Skylake at least).

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-12-21 04:48:24 +01:00
Roland Scheidegger 5b950319ce gallivm: optimize SoA AoS fallback fetch path a little
We should do transpose, not extract/insert, at least with "sufficient" amount
of channels (for 4 channels, extract/insert shuffles generated otherwise look
truly terrifying). Albeit we shouldn't fallback to that so often in any case.
v2: ditch the extract/insert path, not worth keeping (we're going to avoid
hitting the fallback that often with future patches).

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-12-21 04:48:24 +01:00
Roland Scheidegger d7d23aee4b gallivm: (trivial) handle non-aligned fetch for lp_build_fetch_rgba_soa
soa fetch so far always assumed that data was aligned. However, we want to
use this for vertex fetch, and data might not be aligned there, so handle
it in this path too (basically just pass alignment through to other
functions). (It looks like it wouldn't work for cached s3tc but this is
no different than with AoS fetch.)

Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
2016-12-21 04:48:24 +01:00
Axel Davy 123e947228 st/nine: Upload on secondary context for Draw*Up
Avoid synchronization by using the secondary context
for uploading the vertex data for Draw*Up.

v2: Rely on u_upload_mgr to use persistent coherent
buffers. Do not flush.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 0ec4e5f630 st/nine: Dirty MANAGED buffers at Lock time
Tests suggest MANAGED buffers are made dirty
at Lock time, not at Unlock time.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy bad7f7cc63 st/nine: Implement new buffer upload path
This new buffer upload path enables locking
faster than the normal path when using
DISCARD/NOOVERWRITE.

v2: Diverse cleanups and fixes.
v3: Fix allocation size for 'lone' buffers and
add more debug info.
v4: Rewrite of the path to handle when DISCARD/NOOVERWRITE
is not used anymore. The resource content is copied to the
new resource used.
v5: flush for safety after unmap (not sure it is really required
here, but safer to flush).
v6: Do not use the path if persistent coherent mapping is unavailable.
Fix buffer creation flags.
v7: Do not flush since it is not needed.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 8960be0e93 st/nine: Allow non-zero resource offset for vertex buffers
Next patches will introduce an offset.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 1e64be6f91 st/nine: Do not wait for DEFAULT lock for volumes when we can
If the volumes (and the texture container) are not referenced,
then there are no pending operations on them. We can lock directly.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy b4f16615ef st/nine: Do not wait for DEFAULT lock for surfaces when we can
If the surfaces (and the texture container) are not referenced,
then there are no pending operations on them. We can lock directly.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 525a1b292a st/nine: Add arguments to context's blit and copy_region
The new arguments make it possible to reference the objects
while the function hasn't run yet.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 325324c749 st/nine: Idem for nine_context_gen_mipmap
Will make it possible to use the bind count as an indication of
whether the surface/volume is used in the worker thread.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 7089d88199 st/nine: Bind destination for surface/volume uploads
Will make it possible to use the bind count as an indication of
whether the surface/volume is used in the worker thread.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy d4a9b21feb st/nine: Use nine_context_box_upload for volumes
Use nine_context_box_upload for uploads:
. systemmem volume to default volume
. managed volume internal content to its resource.

Check that the uploads are executed before any action
that can alter the data, that is, LockBox and
volume destruction.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy f042639231 st/nine: Fix leak with volume dtor
The last level was not released.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 76e392d852 st/nine: Fix leak with cubetexture dtor
The last level was not released.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy fec0b7f067 st/nine: Use nine_context_box_upload for surfaces
Use nine_context_box_upload for uploads:
. systemmem surface to default surface
. managed surface internal content to its resource.

Check that the uploads are executed before any action
that can alter the data, that is, LockRect,
NineSurface9_CopyDefaultToMem and surface destruction.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy c873a2bd0c st/nine: Implement nine_context_box_upload
This function will be used for surface and volume uploads.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy cadc7a5d94 st/nine: Use nine_context_gen_mipmap in BaseTexture9
Generate mipmaps in the worker thread.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 8d3e0f2187 st/nine: Implement nine_context_gen_mipmap
To offload mipmap generation as well.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy 16b6fb65ae st/nine: Optimize managed buffer upload
Do the upload in the other thread.

Usually managed buffers are used once per frame.
It is then very likely pending_upload is 0 at Lock
time.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00
Axel Davy a78b5f4378 st/nine: Implement nine_context_range_upload
Will be used to upload buffers.

Signed-off-by: Axel Davy <axel.davy@ens.fr>
2016-12-20 23:47:08 +01:00