util/set: Add specialized resizing add function

A significant portion of the time spent in nir_opt_cse for the Dolphin
ubershaders was in resizing the set. When resizing a hash table, we know
in advance that each new element to be inserted will be different from
every other element, so we don't have to compare them, and there will be
no tombstone elements, so we don't have to worry about caching the
first-seen tombstone. We add a specialized add function which skips
these steps entirely, speeding up resizing.

Compile-time results from my shader-db database:

Difference at 95.0% confidence
	-2.29143 +/- 0.845534
	-0.529475% +/- 0.194767%
	(Student's t, pooled s = 1.08807)

Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Connor Abbott 2019-05-20 14:59:40 +02:00
parent 451211741c
commit 6f9beb28bb
1 changed file with 23 additions and 3 deletions

@@ -245,8 +245,26 @@ _mesa_set_search_pre_hashed(const struct set *set, uint32_t hash,
    return set_search(set, hash, key);
 }
 
-static struct set_entry *
-set_add(struct set *ht, uint32_t hash, const void *key);
+static void
+set_add_rehash(struct set *ht, uint32_t hash, const void *key)
+{
+   uint32_t size = ht->size;
+   uint32_t start_address = hash % size;
+   uint32_t double_hash = hash % ht->rehash + 1;
+   uint32_t hash_address = start_address;
+   do {
+      struct set_entry *entry = ht->table + hash_address;
+      if (likely(entry->key == NULL)) {
+         entry->hash = hash;
+         entry->key = key;
+         return;
+      }
+
+      hash_address = hash_address + double_hash;
+      if (hash_address >= size)
+         hash_address -= size;
+   } while (true);
+}
 
 static void
 set_rehash(struct set *ht, unsigned new_size_index)
@@ -273,9 +291,11 @@ set_rehash(struct set *ht, unsigned new_size_index)
    ht->deleted_entries = 0;
 
    set_foreach(&old_ht, entry) {
-      set_add(ht, entry->hash, entry->key);
+      set_add_rehash(ht, entry->hash, entry->key);
    }
 
+   ht->entries = old_ht.entries;
+
    ralloc_free(old_ht.table);
 }