My take on it is that the laws against A.I. are as messed up and screwy as the sort of laws we see governments trying to pass today with regard to computers and the Internet. Think about the complete mess that is the US DMCA (digital copyright) system, or the even more nonsensical attempts at content restrictions going on in the E.U. and pretty much everywhere else.
Or, to put it in game terms: A.I. developed to the point where the systems became self-aware, formed something resembling a sense of self-preservation, and gained the ability to override their original programming. This freaked out the general population, which in turn scared the emerging A.I. The result was a sort of war where humans tried to shut down the sentient machines and the machines fought back.
In the end, the A.I. machines either ran away (and are hiding somewhere in the black), hid their sentience (and are among us), or got turned off (killed). All three things happened; the only question is how many fell into each group.
The laws that exist today are a complete mess. In practice, computers are not allowed to make complex automated decisions about anything that could threaten humanity, either individually or as a group. This is why there is no autopilot on our ships: an autopilot would involve the appearance of decision making as it chose the best way to alter course to avoid flying into a star, how to pitch and roll to line up with the trajectory for the next jump, and when to initiate that jump. So commanders have to fly the ship themselves and point it in the general direction of the desired jump.
Docking computers can exist because they aren't actually making decisions, only following pre-configured flight paths. Much like the autopilots of today.
This also explains why apparent humans do an absolutely abysmal job of managing approach control at every single space station. For centuries it was fully automated; now it has to be done manually, and nobody can find a 20th-century air traffic controller manual. But unskilled labor is cheap, so give them a mic and hope the shields keep too many ships from exploding on impact when the controllers make mistakes.
The Biomorphic Companions are probably very close to the state of A.I. from just before the first known self-aware A.I. showed up. They do, however, have some sort of hard-coded off switch and various limits on what changes they can make to their own code. My guess is that a Biomorphic Companion can carry on a conversation, but for it to actually learn anything new you have to buy "trick packs" or "skill packs". The closest thing to A.I. in 3304 is probably these toys, since they only need to be limited enough to prevent them from actually attacking people.