I'm going to throw out an idea that I had a long time ago for my old chess engine. I never got around to testing how much gain I could get, because the engine was so buggy. Maybe someone will get some use out of it.
The idea is called Progressive Granulation. It builds on the kind of granularity some engines already apply to their evals. Suppose you took your evaluation and rounded it to the nearest pawn. You would get basically the same search, but any of a set of scores within a pawn of each other could come out as best. In other words, the rounded score is a guess at the "true" evaluation with an error of up to half a pawn. The advantage is that scores collide more often, so more of them land exactly on beta and you get more beta cutoffs.
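A minimal sketch of the rounding step, assuming centipawn scores (the function name is mine, not from any particular engine):

```c
/* Round a centipawn score to the nearest multiple of `grain`.
   With grain = 100 (one pawn), all scores in the same 100cp bucket
   compare equal, so scores hit beta exactly much more often.
   Negative scores are handled symmetrically so rounding is
   "to nearest" rather than "toward zero". */
static int granulate(int score, int grain)
{
    if (score >= 0)
        return ((score + grain / 2) / grain) * grain;
    else
        return -((-score + grain / 2) / grain) * grain;
}
```

The worst-case error is half the grain: with grain = 100, a score of 149 rounds to 100 and 150 rounds to 200.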
What progressive granulation does is, for each iteration, run an initial search with a very coarse granularity. That gives a quick idea of which moves lose or win outright: sort of like a material-only search, but still incorporating the other eval terms. You can then reuse those results, via the hash table, to save work in the subsequent finer-grained searches. This is done by rounding alpha and beta with the same granularity the old search used and checking whether the old score is still good enough for a cutoff.
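Here is a hypothetical sketch of that hash probe. All the names (HashEntry, the bound flags, probe_coarse) are made up for illustration; the point is that the current window gets rounded with the grain the stored entry was searched under before the usual bound checks:

```c
/* Round a centipawn score to the nearest multiple of `grain`. */
static int granulate(int score, int grain)
{
    if (score >= 0)
        return ((score + grain / 2) / grain) * grain;
    else
        return -((-score + grain / 2) / grain) * grain;
}

enum { BOUND_LOWER = 1, BOUND_UPPER = 2, BOUND_EXACT = 3 };

typedef struct {
    int score;   /* score stored by the earlier, coarser search      */
    int grain;   /* granularity that search used, e.g. 50            */
    int depth;   /* draft of the stored search                       */
    int flags;   /* BOUND_LOWER / BOUND_UPPER / BOUND_EXACT          */
} HashEntry;

/* Returns 1 and sets *result if the old coarse-grained entry still
   decides the current (alpha, beta) window once the window is
   rounded with the same granularity the old search used. */
static int probe_coarse(const HashEntry *e, int depth,
                        int alpha, int beta, int *result)
{
    if (e->depth < depth)
        return 0;
    int a = granulate(alpha, e->grain);
    int b = granulate(beta,  e->grain);
    if ((e->flags & BOUND_LOWER) && e->score >= b) { *result = e->score; return 1; }
    if ((e->flags & BOUND_UPPER) && e->score <= a) { *result = e->score; return 1; }
    return 0;
}
```

So a lower bound of 100 stored at grain 50 still cuts off against a finer-grained beta of 90, because 90 rounds up to 100 at that grain.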
So you can have multiple levels of granularity, each improving the accuracy of the main search. A table like this seemed reasonable: {50, 10, 1}. With centipawn scores, that gives a maximum rounding error of 1/4 pawn, then 1/20 pawn, and finally 1/200 of a pawn (the finest possible using centipawns).