...or would it work seamlessly with both (which is what I am hoping for, to give a more natural experience).
This should be it. The head position/orientation informs what slice of the world to render, and the eyetracking what part of that slice to emphasise (be it in terms of interaction focus, or graphical adaptations).
That said: I wouldn't expect anything to happen automagically; the game would need to actively make use of the feature. E.g. our current highlighting of targets, based on where your nose is pointing, could be greatly enhanced by eyetracking, but it won't happen if FDev does not implement it, unless one *can* in fact do the sort of thing you suggest, with the eyetracker independently doing a bit of "mouse emulation" on the rendered, HMD-oriented view.
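Just to make that "mouse emulation" idea concrete: in principle it's only a projection of the gaze direction (in the headset's view space) onto the rendered image, giving you a cursor position the game, or an external tool, could feed into its existing mouse-look/selection path. The sketch below is purely illustrative; the function name, the FOV figures, and the coordinate conventions are all made-up assumptions, not any real eyetracker SDK.

```python
import math

def gaze_to_cursor(gaze_dir, h_fov_deg=100.0, v_fov_deg=100.0):
    """Project a gaze direction (HMD view space, -Z forward) onto the
    rendered view as normalized cursor coordinates in [0, 1].
    Returns None if the gaze points behind the viewer."""
    x, y, z = gaze_dir
    if z >= 0:  # looking backwards; no on-screen point
        return None
    # Perspective-project onto the image plane at z = -1.
    px = x / -z
    py = y / -z
    # Scale by the tangent of half the (assumed) field of view,
    # then remap from [-1, 1] to [0, 1].
    half_w = math.tan(math.radians(h_fov_deg) / 2)
    half_h = math.tan(math.radians(v_fov_deg) / 2)
    u = 0.5 + 0.5 * (px / half_w)
    v = 0.5 - 0.5 * (py / half_h)  # flip so v grows downward, like mouse coords
    return (u, v)

# Looking straight ahead lands the emulated "mouse" dead centre.
print(gaze_to_cursor((0.0, 0.0, -1.0)))  # (0.5, 0.5)
```

Note this works entirely on the HMD-oriented view, which is the point: head pose picks the slice of the world, and the gaze only moves a cursor within it.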
Things like foveated rendering will also require engine and game integration, unless NVidia can somehow go in and override what games' shaders do at the driver level (recompiling them, modified to force things like variable rate shading, and filling in the blanks, behind the back of the game), which I am not holding my breath for.
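For what it's worth, the core of gaze-driven variable rate shading is just a coarse per-tile grid of shading rates centred on the gaze point, which the GPU then consumes. Here's a rough sketch of building such a grid; the tile counts, radii, and rate values are invented for illustration and don't correspond to any particular graphics API.

```python
def shading_rate_map(gaze_uv, tiles_x=8, tiles_y=8,
                     inner_radius=0.15, outer_radius=0.35):
    """Build a per-tile shading-rate grid around the gaze point (u, v in [0, 1]).
    Rates loosely mimic VRS levels: 1 = full rate, 2 = 2x2 coarse, 4 = 4x4 coarse."""
    gx, gy = gaze_uv
    rates = []
    for ty in range(tiles_y):
        row = []
        for tx in range(tiles_x):
            # Distance from this tile's centre to the gaze point, normalized.
            cx = (tx + 0.5) / tiles_x
            cy = (ty + 0.5) / tiles_y
            d = ((cx - gx) ** 2 + (cy - gy) ** 2) ** 0.5
            if d < inner_radius:
                row.append(1)   # foveal region: full shading rate
            elif d < outer_radius:
                row.append(2)   # near periphery: 2x2 coarse shading
            else:
                row.append(4)   # far periphery: 4x4 coarse shading
        rates.append(row)
    return rates

# Centre gaze: full-rate tiles in the middle, coarse tiles toward the edges.
for row in shading_rate_map((0.5, 0.5)):
    print(row)
```

The catch, as above, is that *someone* has to upload a grid like this every frame and render with it, and that someone is the engine, not the driver.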
We'll see... (EDIT: I won't be holding my breath for FDev to go out of their way with anything VR, either, but would love to be pleasantly surprised (EDIT2: brute forcing can only take you so far, after all)) :7