Yes and no.
That's the thing with "sentience": you can analyze your own programming and change it. We've come a long way with that (genetics, chemistry), but there's still much road ahead. (Not really sure we should go there, but that's another topic .. maybe AIs are not the worst threat.)
Intelligence doesn't have to be sentient. Turing was really onto something but couldn't escape the narrow anthropocentric view of his time. So his test may be quite useless today, but in treating intelligence as a replicable behaviour he was pointing in the right direction: there actually isn't such a thing as intelligence, only intelligences.
It's semantics. There is such a thing as "human intelligence" .. or "the way humans are programmed" .. which, like I wrote some pages ago, conceptually stems from "understanding", which is more than just "computing".
"There are two things in this universe: 'ice cream' and 'not ice cream'" is also a "valid definition" of "things".

Maybe there's just one "intelligence", and it's not determined by how fast you can compute data or make decisions (for "out of the box" decision-making and "wisdom", see King Solomon) but rather by the level of "understanding" something is capable of. You can't "teach" a spider that a spot is not a valid place to rebuild its webs by destroying them over and over. You can teach a mouse to push a button (buttons not being part of its genetic programming) for some food.
It's "decisionmaking", but not really "intelligence". The computer does not "understand" the stock market, it just crunches large number of data to make a decision - which might be catastrophic in the long run and can crash entire economies, so there's plans to ban such trading.