mesa/src/intel/compiler/brw_fs_saturate_propagation...

/*
 * Copyright © 2013 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#include "brw_fs.h"
#include "brw_fs_live_variables.h"
#include "brw_cfg.h"

/** @file brw_fs_saturate_propagation.cpp
 *
 * Implements a pass that propagates the SAT modifier from a MOV.SAT into the
 * instruction that produced the source of the MOV.SAT, thereby allowing the
 * MOV's src and dst to be coalesced and the MOV removed.
 *
 * For instance,
 *
 *    ADD     tmp, src0, src1
 *    MOV.SAT dst, tmp
 *
 * would be transformed into
 *
 *    ADD.SAT tmp, src0, src1
 *    MOV     dst, tmp
 */
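
/* The pass works one basic block at a time, walking instructions in reverse:
 * each MOV.SAT is matched against the instruction that produced its source,
 * and the saturate flag is moved onto that instruction when the pass can show
 * that no other reader of the value would observe the change.
 */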
static bool
opt_saturate_propagation_local(fs_visitor *v, bblock_t *block,
                               unsigned dispatch_width)
{
   bool progress = false;
   int ip = block->end_ip + 1;

   foreach_inst_in_block_reverse(fs_inst, inst, block) {
      ip--;

      if (inst->opcode != BRW_OPCODE_MOV ||
          !inst->saturate ||
          inst->dst.file != VGRF ||
          inst->dst.type != inst->src[0].type ||
          inst->src[0].file != VGRF ||
          inst->src[0].abs)
         continue;
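
      /* This MOV.SAT is a propagation candidate: find where the live range of
       * its source ends, then scan backwards for the instruction that wrote
       * that source.
       */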
      int src_var = v->live_intervals->var_from_reg(inst->src[0]);
      int src_end_ip = v->live_intervals->end[src_var];

      bool interfered = false;

      foreach_inst_in_block_reverse_starting_from(fs_inst, scan_inst, inst) {
         if (scan_inst->exec_size == inst->exec_size &&
             regions_overlap(scan_inst->dst, scan_inst->size_written,
                             inst->src[0], inst->size_read(0))) {
            if (scan_inst->is_partial_var_write(dispatch_width) ||
                (scan_inst->dst.type != inst->dst.type &&
                 !scan_inst->can_change_types()))
               break;

            if (scan_inst->saturate) {
               inst->saturate = false;
               progress = true;
            } else if (src_end_ip == ip || inst->dst.equals(inst->src[0])) {
               if (scan_inst->can_do_saturate()) {
                  if (scan_inst->dst.type != inst->dst.type) {
                     scan_inst->dst.type = inst->dst.type;
                     for (int i = 0; i < scan_inst->sources; i++) {
                        scan_inst->src[i].type = inst->dst.type;
                     }
                  }
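
                  /* A negate modifier on the MOV's source cannot simply be
                   * dropped, but it can be folded into the producer by
                   * negating its sources: one source for MUL, the first two
                   * sources for MAD, and both sources for ADD.  Any other
                   * opcode gives up.
                   */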
                  if (inst->src[0].negate) {
                     if (scan_inst->opcode == BRW_OPCODE_MUL) {
                        scan_inst->src[0].negate = !scan_inst->src[0].negate;
                        inst->src[0].negate = false;
                     } else if (scan_inst->opcode == BRW_OPCODE_MAD) {
                        for (int i = 0; i < 2; i++) {
                           if (scan_inst->src[i].file == IMM) {
                              brw_negate_immediate(scan_inst->src[i].type,
                                                   &scan_inst->src[i].as_brw_reg());
                           } else {
                              scan_inst->src[i].negate = !scan_inst->src[i].negate;
                           }
                        }
                        inst->src[0].negate = false;
                     } else if (scan_inst->opcode == BRW_OPCODE_ADD) {
                        if (scan_inst->src[1].file == IMM) {
                           if (!brw_negate_immediate(scan_inst->src[1].type,
                                                     &scan_inst->src[1].as_brw_reg())) {
                              break;
                           }
                        } else {
                           scan_inst->src[1].negate = !scan_inst->src[1].negate;
                        }
                        scan_inst->src[0].negate = !scan_inst->src[0].negate;
                        inst->src[0].negate = false;
                     } else {
                        break;
                     }
                  }

                  scan_inst->saturate = true;
                  inst->saturate = false;
                  progress = true;
               }
            }
            break;
         }
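
         /* scan_inst did not produce the MOV's source.  If it reads that
          * source, propagation is only safe when the read is itself a
          * saturating MOV of the same value with no source modifiers;
          * anything else counts as interference.
          */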
         for (int i = 0; i < scan_inst->sources; i++) {
            if (scan_inst->src[i].file == VGRF &&
                scan_inst->src[i].nr == inst->src[0].nr &&
                scan_inst->src[i].offset / REG_SIZE ==
                inst->src[0].offset / REG_SIZE) {
               if (scan_inst->opcode != BRW_OPCODE_MOV ||
                   !scan_inst->saturate ||
                   scan_inst->src[0].abs ||
                   scan_inst->src[0].negate ||
                   scan_inst->src[0].abs != inst->src[0].abs ||
                   scan_inst->src[0].negate != inst->src[0].negate) {
                  interfered = true;
                  break;
               }
            }
         }

         if (interfered)
            break;
      }
   }

   return progress;
}

bool
fs_visitor::opt_saturate_propagation()
{
   bool progress = false;

   calculate_live_intervals();

   foreach_block (block, cfg) {
      progress = opt_saturate_propagation_local(this, block,
                                                 dispatch_width) || progress;
   }

   /* Live intervals are still valid. */

   return progress;
}